trollek committed on
Commit 6d1db1f
1 Parent(s): 177b371

Update README.md

Files changed (1):
  1. README.md +8 -1

README.md CHANGED
@@ -76,7 +76,7 @@ Here's the thing with the 2 extra layers compared to my first model. When I trai
 
 I have any model generate a bunch of prompts that a teacher model answers with gusto (the chosen column), and then have NinjaMouse2 also answer them (as the rejects). **BAM**. Skibidibi doo. Have I made these DPO datasets? No. But the prompts, their evaluations, its own responses, responses from better models, and evaluations of both are included in the training. You can find the dataset [here](https://huggingface.co/datasets/trollek/Self-Rewarding-Mouse).
 
-### Ollama
+## Ollama
 
 I have [quantised this model](https://huggingface.co/trollek/NinjaMouse2-2.5B-v0.1-GGUF) and made it available through [LM Studio](https://lmstudio.ai/) and [Ollama](https://ollama.com/) in Q4_K_M and Q6_K.
 
@@ -88,6 +88,13 @@ ollama run trollek/ninjamouse2:34l-q4_K_M
 ollama run trollek/ninjamouse2:34l-q6_K
 ```
 
+## Quantizations
+
+[@cgus](https://huggingface.co/cgus) has done a great job with the quants. Reducing models from 16-bit to ~2-bit, and every bit in between, is Numberwang and much appreciated.
+
+- **GGUF iMatrix:** [cgus](https://huggingface.co/cgus)/[NinjaMouse2-2.5B-v0.1-iMat-GGUF](https://huggingface.co/cgus/NinjaMouse2-2.5B-v0.1-iMat-GGUF)
+- **Exllamav2:** [cgus](https://huggingface.co/cgus)/[NinjaMouse2-2.5B-v0.1-exl2](https://huggingface.co/cgus/NinjaMouse2-2.5B-v0.1-exl2)
+- **GGUF:** trollek/[NinjaMouse2-2.5B-v0.1-GGUF](https://huggingface.co/trollek/NinjaMouse2-2.5B-v0.1-GGUF)
 
 ## Notes
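The chosen/rejected construction described in the README diff above (teacher model answers as "chosen", NinjaMouse2's own answers as "rejected") can be sketched roughly like this. This is a minimal illustration, not the author's actual pipeline; both answer functions are placeholders standing in for real model calls:

```python
# Hedged sketch: assembling DPO-style preference rows in the prompt/chosen/rejected
# schema. The two answer functions below are hypothetical stand-ins for model calls.

def teacher_answer(prompt: str) -> str:
    # Stand-in for a stronger teacher model's response (the "chosen" column).
    return f"[teacher's answer to: {prompt}]"

def student_answer(prompt: str) -> str:
    # Stand-in for NinjaMouse2's own response (the "rejected" column).
    return f"[student's answer to: {prompt}]"

def build_dpo_rows(prompts):
    """Return one preference row per prompt, pairing chosen and rejected responses."""
    return [
        {"prompt": p, "chosen": teacher_answer(p), "rejected": student_answer(p)}
        for p in prompts
    ]

rows = build_dpo_rows(["Explain the difference between Q4_K_M and Q6_K quantisation."])
print(rows[0]["prompt"])
```

In a real run, the generated prompts and both sets of responses (plus their evaluations) would come from actual model inference rather than the placeholder strings used here.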