digitalpipelines committed f78c64d (parent: 5fdd23f): Update README.md

README.md CHANGED
@@ -8,6 +8,8 @@ datasets:
 Fine-tuned [OpenLLaMA-7B](https://huggingface.co/openlm-research/open_llama_7b) with the uncensored/unfiltered Wizard-Vicuna conversation dataset [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored).
 
 Used QLoRA for fine-tuning, following the process outlined in https://georgesung.github.io/ai/qlora-ift/
 
+A quantized GPTQ model can be found at [digitalpipelines/llama2_7b_chat_uncensored-GPTQ](https://huggingface.co/digitalpipelines/llama2_7b_chat_uncensored-GPTQ)
+
 # Prompt style
 
 The model was trained with the following prompt style:
 
 ```
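For context, a minimal sketch of pulling the model this README describes from the Hub with `transformers`. The GPTQ repo ID is taken from the diff above; the full-precision repo ID is inferred from the GPTQ link's name and is an assumption, as are all loading details:

```python
# Sketch only: repo IDs per the README diff; the non-GPTQ ID is an assumption
# inferred from the GPTQ link's name.
BASE_ID = "digitalpipelines/llama2_7b_chat_uncensored"
GPTQ_ID = "digitalpipelines/llama2_7b_chat_uncensored-GPTQ"


def load(model_id: str = BASE_ID):
    """Download tokenizer and weights from the Hub; returns (tokenizer, model).

    `transformers` is imported lazily so this module can be inspected without
    the dependency installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model
```

Prompts sent to the loaded model should follow the prompt style shown in the section above; the GPTQ variant additionally requires a GPTQ-capable loader, which is out of scope for this sketch.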