---
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# openaccess-ai-collective/llama-13b-alpaca-wizard
## Training
- `vicgalle/alpaca-gpt4`: 1 epoch, learning rate 3e-5 (wandb: https://wandb.ai/wing-lian/wizard-vicuna-gpt4/overview)
  - `deepspeed scripts/finetune.py configs/axolotl/wizard-vicuna-13b-step1.yml --deepspeed configs/ds_config.json --num_epochs 2 --warmup_steps 46 --logging_steps 1 --save_steps 23`
- `ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered` (wizardlm): https://wandb.ai/wing-lian/wizard-vicuna-gpt4/runs/4y38knw4
  - `deepspeed scripts/finetune.py configs/axolotl/wizard-vicuna-13b-step2.yml --deepspeed configs/ds_config-step2.json --num_epochs 2 --logging_steps 1`
- `vicuna`: TBD
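
## Usage

The model can be loaded for text generation with 🤗 Transformers (the card lists `library_name: transformers` and `pipeline_tag: text-generation`). The snippet below is a minimal sketch, not the canonical inference script: the Alpaca-style prompt template is an assumption based on the training datasets and may not match the exact format used during finetuning.

```python
# Minimal inference sketch using the standard transformers text-generation API.
# The Alpaca-style prompt template below is an assumption based on the training
# datasets (alpaca-gpt4, WizardLM evol-instruct); verify it against the axolotl configs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/llama-13b-alpaca-wizard"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 13B parameters in fp16 need roughly 26 GB of GPU memory
    device_map="auto",
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what DeepSpeed ZeRO does in two sentences.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
# Strip the prompt tokens and print only the newly generated response
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
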
<pre>Brought to you by the OpenAccess AI Collective</pre>