# Overview
Fine-tuned OpenLLaMA-7B on the uncensored/unfiltered Wizard-Vicuna conversation dataset `digitalpipelines/wizard_vicuna_70k_uncensored`. Fine-tuning used QLoRA, following the process outlined at https://georgesung.github.io/ai/qlora-ift/.
- A GPTQ-quantized model is available at `digitalpipelines/llama2_7b_chat_uncensored-GPTQ`.
- GGML 2, 3, 4, 5, 6 and 8-bit quantized models for CPU+GPU inference are available at `digitalpipelines/llama2_7b_chat_uncensored-GGML`.
# Prompt style
The model was trained with the following prompt style:
```
### HUMAN:
Hello

### RESPONSE:
Hi, how are you?

### HUMAN:
I'm fine.

### RESPONSE:
How can I help you?
...
```
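Because the model was trained on this exact turn format, prompts should reproduce it precisely. A minimal sketch of a helper that assembles such a prompt (the function name and structure are illustrative, not part of any official tooling for this model):

```python
def build_prompt(turns):
    """Assemble a prompt in the ### HUMAN: / ### RESPONSE: format.

    `turns` is a list of (role, text) pairs, where role is "human" or
    "response". The prompt ends with an open "### RESPONSE:" header so
    the model generates the next assistant reply.
    """
    headers = {"human": "### HUMAN:", "response": "### RESPONSE:"}
    lines = []
    for role, text in turns:
        lines.append(headers[role])
        lines.append(text)
    lines.append(headers["response"])
    return "\n".join(lines) + "\n"


prompt = build_prompt([
    ("human", "Hello"),
    ("response", "Hi, how are you?"),
    ("human", "I'm fine."),
])
```

The returned string can be passed directly as the input text to whichever inference backend is used (e.g. the GPTQ or GGML builds listed above).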