---
license: apache-2.0
datasets:
  - digitalpipelines/wizard_vicuna_70k_uncensored
---

# Overview

Fine-tuned OpenLLaMA-7B on the uncensored/unfiltered Wizard-Vicuna conversation dataset [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored). Fine-tuning used QLoRA, following the process outlined in https://georgesung.github.io/ai/qlora-ift/.
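For readers who want to reproduce a similar setup, a minimal QLoRA sketch with Hugging Face `transformers`, `peft`, and `bitsandbytes` is shown below. The base checkpoint name, LoRA hyperparameters, and training details are illustrative assumptions; the linked write-up describes the actual process used for this model.

```python
# Minimal QLoRA setup sketch (illustrative; hyperparameters and the base
# checkpoint name are assumptions, not the exact values used for this model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "openlm-research/open_llama_7b"  # assumption: base checkpoint id

# Load the base model in 4-bit NF4 precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)

# Attach trainable low-rank adapters (the "LoRA" part); only these are updated.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Training on the Wizard-Vicuna conversations would then proceed with a standard
# supervised fine-tuning loop over prompts in the "### HUMAN / ### RESPONSE" format.
```

The 4-bit base weights stay frozen and only the adapter weights are trained, which is what makes fine-tuning a 7B model on a single GPU practical.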

A GPTQ-quantized version of this model can be found at [digitalpipelines/llama2_7b_chat_uncensored-GPTQ](https://huggingface.co/digitalpipelines/llama2_7b_chat_uncensored-GPTQ).
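With the `optimum` and `auto-gptq` packages installed alongside `transformers`, the GPTQ checkpoint can be loaded like any other causal LM; a minimal sketch:

```python
# Sketch: load the GPTQ-quantized checkpoint (requires optimum and auto-gptq).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "digitalpipelines/llama2_7b_chat_uncensored-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```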

# Prompt style

The model was trained with the following prompt style:

```
### HUMAN:
Hello

### RESPONSE:
Hi, how are you?

### HUMAN:
I'm fine.

### RESPONSE:
How can I help you?
...
```
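As a usage sketch, the prompt format above can be paired with standard `transformers` generation. The repository id and generation parameters below are assumptions for illustration:

```python
# Sketch: build a prompt in the "### HUMAN / ### RESPONSE" format and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "digitalpipelines/llama2_7b_chat_uncensored"  # assumption: this card's repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a response and decode only the newly generated tokens.
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

If the model keeps generating past its answer, trim the output at the next `### HUMAN:` marker or add a stopping criterion for that string.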