Base model: https://huggingface.co/WizardLM/WizardLM-13B-V1.2
Trained on the following dataset: https://huggingface.co/datasets/gmongaras/reddit_negative
Trained for about 600 steps with a batch size of 6, 3 gradient accumulation steps, and LoRA adapters on all layers.
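A minimal sketch of that setup, assuming the standard transformers + peft + datasets stack. The rank, alpha, learning rate, max sequence length, text column name, and `target_modules` list are assumptions, not the exact training script; the batch size, accumulation steps, and step count come from the card above.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "WizardLM/WizardLM-13B-V1.2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# LoRA on every attention and MLP projection ("all layers").
lora_config = LoraConfig(
    r=16,           # rank: assumed
    lora_alpha=32,  # scaling: assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

dataset = load_dataset("gmongaras/reddit_negative", split="train")

def tokenize(batch):
    # "text" column name is an assumption about the dataset schema.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="wizardlm-13b-reddit-negative",
    per_device_train_batch_size=6,   # batch size from the card
    gradient_accumulation_steps=3,   # accumulation steps from the card
    max_steps=600,                   # ~600 steps from the card
    learning_rate=2e-4,              # assumed
    fp16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

With 3 accumulation steps on top of the per-device batch of 6, the effective batch size per update is 18 examples per device.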