
First version of the instruction-tuned Bloomz-7B1 model, fine-tuned on the Stanford Alpaca instruction-tuning dataset (52k examples) using the Hugging Face DeepSpeed integration.

Base Model: bigscience/bloomz-7b1

Training Details:

  • Epochs: 4
  • Batch Size: 5 per device (instantaneous) x 3 gradient accumulation steps x 8 GPUs = 120 effective
  • Max Length: 1024
  • Weight Decay: 0
  • Learning Rate: 5e-5
  • Learning Rate Scheduler Type: Linear
  • Number of Warmup Steps: 40
  • Machine: 8x A100 80GB
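The effective global batch size above follows from multiplying the per-device batch, the gradient accumulation steps, and the GPU count. A minimal sketch of that arithmetic (variable names are illustrative, not taken from the training script):

```python
# Effective global batch size for the run described above.
per_device_batch = 5   # instantaneous batch size per device
grad_accum_steps = 3   # gradient accumulation steps
num_gpus = 8           # 8x A100 80GB

global_batch = per_device_batch * grad_accum_steps * num_gpus
print(global_batch)  # 120
```

With gradient accumulation, each optimizer step sees 120 examples even though only 5 fit on a device at once.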

Dataset Details:

Dataset: iamplus/Instruction_Tuning

Files:

  • stanford_alpaca_it.csv
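The CSV is assumed here to follow the standard Alpaca schema (`instruction`, `input`, `output` columns); whether this particular file uses those exact column names is an assumption. A hedged sketch of how one such row could be rendered into a training prompt, using the published Stanford Alpaca template:

```python
def format_example(row: dict) -> str:
    """Render one Alpaca-style row into a single training prompt.

    Assumes 'instruction', 'input', and 'output' keys; the actual
    column names in stanford_alpaca_it.csv may differ.
    """
    if row.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{row['instruction']}\n\n"
            f"### Input:\n{row['input']}\n\n"
            f"### Response:\n{row['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{row['instruction']}\n\n"
        f"### Response:\n{row['output']}"
    )

example = {"instruction": "Name three primary colors.", "input": "", "output": "Red, yellow, blue."}
print(format_example(example))
```

Rows with an empty `input` field use the shorter template variant, as in the original Alpaca preprocessing.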