---
license: mit
---
This repo contains a low-rank (LoRA) adapter for LLaMA-7b, fine-tuned on `Nebulous/gpt4all_pruned`, `sahil2801/CodeAlpaca-20k`, `yahma/alpaca-cleaned`, and several datasets from the OpenAssistant project.
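
For reference, an adapter like this is typically loaded with PEFT on top of the base model. The sketch below assumes a LLaMA-7b checkpoint id and uses a placeholder for this adapter's repo id; substitute the actual identifiers before running.

```python
# Minimal usage sketch: base-model checkpoint and adapter repo id below
# are illustrative placeholders, not confirmed by this card.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_id = "decapoda-research/llama-7b-hf"  # assumed base checkpoint
base_model = LlamaForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained(base_id)

# Attach the low-rank adapter weights from this repo (placeholder id).
model = PeftModel.from_pretrained(base_model, "jordiclive/<this-adapter-repo>")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```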
This version of the weights was trained with the following hyperparameters (see the configuration sketch after this list):
- Epochs: 2
- Batch size: 128
- Max Length: 2048
- Learning rate: 4e-6
- LoRA _r_: 16
- LoRA target modules: q_proj, k_proj, v_proj, o_proj
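
Expressed as a PEFT `LoraConfig`, those settings would look roughly like the sketch below. Note that `lora_alpha` and `lora_dropout` are not stated on this card; the values shown are illustrative assumptions only.

```python
# A LoraConfig mirroring the listed hyperparameters.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                                     # LoRA r from the list above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # LoRA target modules
    lora_alpha=16,                                            # assumed, not stated on this card
    lora_dropout=0.05,                                        # assumed, not stated on this card
    bias="none",
    task_type="CAUSAL_LM",
)
```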