macadeliccc committed
Update README.md

README.md CHANGED
@@ -69,6 +69,14 @@ Quantizations provided by [TheBloke](https://huggingface.co/TheBloke/laser-dolphi
+ GGUF chat available [here](https://huggingface.co/spaces/macadeliccc/laser-dolphin-mixtral-chat-GGUF)
+ 4-bit bnb chat available [here](https://huggingface.co/spaces/macadeliccc/laser-dolphin-mixtral-chat)
+
+ # Ollama
+
+ ```bash
+ ollama run macadeliccc/laser-dolphin-mixtral-2x7b-dpo
+ ```
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/oVwa7Dwkt00tk8_MtlJdR.png)
+
  ## Code Example
  Switch to the commented model definition to run the model in 4-bit. It should fit in roughly 9 GB of memory and still exceed the single 7B model by about 5-6 points.