---
license: apache-2.0
datasets:
  - totally-not-an-llm/sharegpt-hyperfiltered-3k
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

This is OpenLLaMA 3B V2 fine-tuned on ShareGPT Hyperfiltered for 1 epoch.

Prompt template:
```
### HUMAN:
{prompt}

### RESPONSE:
<leave a newline for the model to answer>
```
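A minimal usage sketch with the `transformers` text-generation API, applying the prompt template above. The model id `acrastt/Puma-3B` is assumed from the repository path; adjust it if you load the model from elsewhere.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "acrastt/Puma-3B"  # assumed model id, taken from the repo path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Fill the card's prompt template, leaving a newline after "### RESPONSE:"
prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```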

A q4_1 GGML quant is available here.

Note: Don't expect this model to be good; I was just starting out with fine-tuning, so please don't roast me!