---
license: openrail
---
Base model:
https://huggingface.co/WizardLM/WizardLM-13B-V1.2

Model trained on the following dataset:
https://huggingface.co/datasets/gmongaras/reddit_negative

Trained for about 600 steps with a batch size of 6 and 3 gradient accumulation steps (an effective batch size of 18), using LoRA adapters on all layers.
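As a rough illustration of what "LoRA adapters on all layers" means, here is a minimal numerical sketch of the LoRA update itself: each frozen weight matrix W is augmented with a trainable low-rank correction (alpha / r) * B @ A, and only A and B are updated during training. The dimensions, rank, and scaling below are made up for illustration and are not the actual WizardLM-13B or training hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes and LoRA hyperparameters, chosen only for this sketch.
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialised

def lora_forward(x):
    # Base output plus the scaled low-rank correction (alpha / r) * B @ A @ x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))

# With B zero-initialised, the adapter starts as an exact no-op,
# so fine-tuning begins from the base model's behaviour.
assert np.allclose(lora_forward(x), W @ x)
```

In practice this is what a library such as PEFT applies to every targeted layer; the adapter adds only r * (d_in + d_out) trainable parameters per matrix instead of d_in * d_out.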