---
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Freedom-AI-Collective/llama-13b-alpaca-wizard-vicuna
## Training
- `vicgalle/alpaca-gpt4`: 1 epoch, learning rate 3e-5 (https://wandb.ai/wing-lian/wizard-vicuna-gpt4/overview)
- `deepspeed scripts/finetune.py configs/axolotl/wizard-vicuna-13b-step1.yml --deepspeed configs/ds_config.json --num_epochs 2 --warmup_steps 46 --logging_steps 1 --save_steps 23`
- `wizardlm` (https://wandb.ai/wing-lian/wizard-vicuna-gpt4/runs/4y38knw4)
- `deepspeed scripts/finetune.py configs/axolotl/wizard-vicuna-13b-step2.yml --deepspeed configs/ds_config-step2.json --num_epochs 2 --logging_steps 1`
- `vicuna` TBD
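
## Usage
A minimal inference sketch with `transformers`. The repo id is taken from the heading above; the Alpaca-style prompt template and generation settings are assumptions, so verify them against the training configs before relying on the output format.

```python
# Minimal text-generation sketch (assumes a fp16-capable GPU and the `accelerate`
# package for device_map="auto"; prompt template is an assumption, not confirmed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Freedom-AI-Collective/llama-13b-alpaca-wizard-vicuna"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Alpaca-style instruction prompt (assumed format).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a black hole is in one paragraph.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```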
<pre>Brought to you by the Freedom AI Collective</pre>