
First version of the instruction-tuned Bloomz-7B1 model, fine-tuned on a ChatGPT instruction dataset (85k examples) using the Hugging Face DeepSpeed integration.

Base Model: bigscience/bloomz-7b1
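
A minimal sketch of loading and querying the fine-tuned model with the Transformers API is shown below. The prompt format is an assumption, since this card does not document an instruction template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iamplus/bloomz-7b1-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory; assumption, not from the card
    device_map="auto",
)

# The exact instruction template is not documented; a plain prompt is assumed here.
prompt = "Explain the difference between supervised and unsupervised learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```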

Training Details:

  • Epochs: 5
  • Batch Size: 5 per device x 2 gradient accumulation steps x 8 GPUs = 80 (effective)
  • Max Length: 512
  • Weight Decay: 0
  • Learning Rate: 5e-5
  • Learning Rate Scheduler Type: Linear
  • Number of Warmup Steps: 0
  • Machine: 8x A100 80GB
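
A rough sketch of how these hyperparameters map onto a Hugging Face `TrainingArguments` configuration is given below. The output directory, DeepSpeed config path, and mixed-precision setting are assumptions; only the hyperparameters listed above come from this card.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration from the values above.
# "bloomz-7b1-v1-output" and "ds_config.json" are placeholder names, not from the card.
training_args = TrainingArguments(
    output_dir="bloomz-7b1-v1-output",
    num_train_epochs=5,
    per_device_train_batch_size=5,   # x 2 grad accumulation x 8 GPUs = 80 effective
    gradient_accumulation_steps=2,
    learning_rate=5e-5,
    weight_decay=0.0,
    lr_scheduler_type="linear",
    warmup_steps=0,
    bf16=True,                       # assumption: mixed precision on A100s
    deepspeed="ds_config.json",      # HF DeepSpeed integration; ZeRO stage not documented
)
```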

Dataset Details:

Dataset: iamplus/Instruction_Tuning

Files:

  • chat_gpt_v1.csv
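
A minimal sketch of loading this file with the Datasets library is shown below, assuming `chat_gpt_v1.csv` sits at the top level of the `iamplus/Instruction_Tuning` repo; the column layout is not documented on this card.

```python
from datasets import load_dataset

# Load only the chat_gpt_v1.csv file from the iamplus/Instruction_Tuning repo.
# The column names and schema are not described here and may differ.
dataset = load_dataset(
    "iamplus/Instruction_Tuning",
    data_files="chat_gpt_v1.csv",
    split="train",
)
print(dataset)
print(dataset[0])
```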
