InferenceIllusionist committed
Commit 2a8999d
1 Parent(s): 56fc63f

Adding links to repo

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -40,8 +40,8 @@ Mistral-7b-02 base model was fine-tuned using the [RealWorldQA dataset](https://
 * This model lacks the conversational prose of Excalibur-7b-DPO and is much "drier" in tone
 
 <b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>
-1. [Quantized - Limited VRAM Option (197mb)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true)
-2. [Unquantized - Premium Option / Best Quality (596mb)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mmproj-model-f16.gguf?download=true)
+1. [Quantized - Limited VRAM Option (197mb)](https://huggingface.co/InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true)
+2. [Unquantized - Premium Option / Best Quality (596mb)](https://huggingface.co/InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT/resolve/main/mmproj-model-f16.gguf?download=true)
 
 Select the gguf file of your choice in [Koboldcpp](https://github.com/LostRuins/koboldcpp/releases/) as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
 <img src="https://i.imgur.com/x8vqH29.png" width="425"/>
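
The vision setup described in the README hunk above can also be scripted instead of using the GUI. The sketch below is a hedged example, not part of the commit: it assumes `huggingface_hub` is installed, that your Koboldcpp build accepts `--model`/`--mmproj` flags (verify with `--help`), and the model GGUF filename is a placeholder for whichever quant you actually use.

```python
# Sketch: download the mmproj + model GGUF from the repo linked above and launch
# Koboldcpp with vision support enabled.
import subprocess
from huggingface_hub import hf_hub_download

REPO = "InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT"

# Option 1 from the README: quantized mmproj (~197mb), the lower-VRAM choice.
# Swap in "mmproj-model-f16.gguf" for Option 2 (best quality, ~596mb).
mmproj_path = hf_hub_download(repo_id=REPO, filename="mistral-7b-mmproj-v1.5-Q4_1.gguf")

# Placeholder filename -- replace with the model quant you actually downloaded.
model_path = hf_hub_download(repo_id=REPO, filename="Mistral-RealworldQA-v0.2-7b-SFT-Q4_K_M.gguf")

# Launch the Koboldcpp binary (or `python koboldcpp.py`) with the mmproj attached;
# this mirrors filling the LLaVA mmproj field in the model submenu of the GUI.
subprocess.run(["koboldcpp", "--model", model_path, "--mmproj", mmproj_path], check=True)
```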