Locutusque and bartowski committed
Commit 5812e6c
1 Parent(s): ba71761

Add ExLlamaV2 quant link (#2)


- Add ExLlamaV2 quant link (b6758f74b312c71635861c52b0df7d534af6e30a)


Co-authored-by: Bartowski <bartowski@users.noreply.huggingface.co>

Files changed (1): README.md +5 -1
README.md CHANGED
@@ -78,4 +78,8 @@ The bluemoon dataset was filtered from the training data as it showed to cause p
 - No model parameters were frozen.
 - This model was trained on OpenAI's ChatML prompt format. Because this model has function calling capabilities, the prompt format is slightly different, here's what it would look like: ```<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>call\n{function call message}<|im_end|>\n<|im_start|>function\n{function response message}<|im_end|>\n<|im_start|>assistant\n{assistant message}</s>```
 
- This model was fine-tuned using the TPU-Alignment repository. https://github.com/Locutusque/TPU-Alignment
+ This model was fine-tuned using the TPU-Alignment repository. https://github.com/Locutusque/TPU-Alignment
+
+# Quants
+
+ExLlamaV2 by bartowski https://huggingface.co/bartowski/Hercules-3.1-Mistral-7B-exl2