Locutusque bartowski committed on
Commit 33d8144
1 Parent(s): e9ab8a2

Add exllamav2 quant link (#5)


- Add exllamav2 quant link (cdec4919e0aabf8f00f75485f81583c7470140f8)


Co-authored-by: Bartowski <bartowski@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +5 -1
README.md CHANGED
@@ -71,4 +71,8 @@ Hercules-3.0-Mistral-7B is fine-tuned from the following sources:
 - No model parameters were frozen.
 - This model was trained on OpenAI's ChatML prompt format. Because this model has function calling capabilities, the prompt format is slightly different, here's what it would look like: ```<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>call\n{function call message}<|im_end|>\n<|im_start|>function\n{function response message}<|im_end|>\n<|im_start|>assistant\n{assistant message}</s>```
 
- This model was fine-tuned using the TPU-Alignment repository. https://github.com/Locutusque/TPU-Alignment
+ This model was fine-tuned using the TPU-Alignment repository. https://github.com/Locutusque/TPU-Alignment
+
+ # Quants
+
+ ExLlamaV2 by bartowski https://huggingface.co/bartowski/Hercules-3.0-Mistral-7B-exl2
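
For reference, here is a minimal sketch of how the ChatML-with-function-calling template described in the diff above could be assembled in Python. The `build_prompt` helper and the example tool-call payloads are illustrative placeholders, not part of this commit or the model repository.

```python
# Minimal sketch: assemble the ChatML prompt format with optional function-calling
# turns, following the template shown in the README. Payloads are placeholders.

def build_prompt(system, user, function_call=None, function_response=None):
    """Build a prompt string matching the README's ChatML-with-function-calling template."""
    parts = [
        f"<|im_start|>system\n{system}<|im_end|>\n",
        f"<|im_start|>user\n{user}<|im_end|>\n",
    ]
    # Optional function-calling turns, as in the README template.
    if function_call is not None:
        parts.append(f"<|im_start|>call\n{function_call}<|im_end|>\n")
    if function_response is not None:
        parts.append(f"<|im_start|>function\n{function_response}<|im_end|>\n")
    # Leave the assistant turn open; the model generates the reply and emits
    # the closing </s> itself.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


if __name__ == "__main__":
    print(build_prompt(
        system="You are a helpful assistant with access to tools.",
        user="What is the weather in Paris?",
        function_call='{"name": "get_weather", "arguments": {"city": "Paris"}}',
        function_response='{"temperature_c": 18, "condition": "cloudy"}',
    ))
```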