---
license: apache-2.0
datasets:
- totally-not-an-llm/everything-sharegptformat-morecleaned
language:
- en
pipeline_tag: text-generation
---
This is OpenLLaMA 3B V2 finetuned on EverythingLM Data (ShareGPT format, more cleaned) for 1 epoch.
Prompt template:

```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
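A minimal sketch of using this template with transformers; the repository id below is a placeholder, swap in the actual model repo:

```python
# Minimal sketch: generate with the ### HUMAN: / ### RESPONSE: template.
# "your-username/open-llama-3b-v2-everythinglm" is a placeholder repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/open-llama-3b-v2-everythinglm"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt exactly as in the template, leaving a newline after RESPONSE:
prompt = "### HUMAN:\nWhat is the capital of France?\n### RESPONSE:\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Print only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```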
The q4_1 GGML quant is available here. All GGML quants are available here.
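A rough sketch of running the q4_1 GGML quant with llama-cpp-python (it needs a release that still loads GGML files, i.e. pre-GGUF; the file name is a placeholder):

```python
# Rough sketch: run the q4_1 GGML quant locally with llama-cpp-python.
# Requires a llama-cpp-python version that still reads GGML files (pre-GGUF);
# the model_path is a placeholder for wherever you downloaded the quant.
from llama_cpp import Llama

llm = Llama(model_path="open-llama-3b-v2.q4_1.bin")  # placeholder file name

prompt = "### HUMAN:\nWrite a haiku about autumn.\n### RESPONSE:\n"

out = llm(prompt, max_tokens=128, stop=["### HUMAN:"])
print(out["choices"][0]["text"])
```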
Note: Don't expect this model to be good, I was just starting out with fine-tuning. So please don't roast me!