---
license: bigscience-openrail-m
datasets:
  - iamplus/Instruction_Tuning
---

Instruction-tuned Bloomz-7B1 model, trained on a ChatGPT dataset (85k examples) using Colossal-AI.

Base Model: bigscience/bloomz-7b1
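
Since the base model is a standard BLOOMZ causal LM, the fine-tuned checkpoint can be loaded with the `transformers` Auto classes. The snippet below is a minimal sketch: the repo id is a placeholder (substitute the actual Hugging Face repo for this checkpoint), and the generation settings are assumptions, not the settings used in training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iamplus/bloomz-7b1-v3"  # placeholder repo id (assumption) -- replace with the real one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the 7B model on a single large GPU
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Explain instruction tuning in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```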

Training Details (summarized as a rough config sketch after the list):

  • Epochs: 5
  • Batch Size: 32 per device x 1 gradient accumulation step x 8 GPUs = 256 effective
  • Max Length: 512
  • Weight Decay: 0
  • Learning Rate: 2e-5
  • Learning Rate Scheduler Type: Cosine
  • Number of Warmup Steps: 0
  • Machine: 8x A100 80GB
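
The run itself used Colossal-AI, and the original launch script is not included here. As a rough point of reference only, the hyperparameters above map onto Hugging Face `TrainingArguments` as in the sketch below; the output directory and precision flag are illustrative assumptions.

```python
from transformers import TrainingArguments

# Approximation of the configuration listed above -- not the original Colossal-AI script.
training_args = TrainingArguments(
    output_dir="bloomz-7b1-v3",        # illustrative path (assumption)
    num_train_epochs=5,
    per_device_train_batch_size=32,    # x 1 accumulation step x 8 GPUs = 256 effective
    gradient_accumulation_steps=1,
    learning_rate=2e-5,
    weight_decay=0.0,
    lr_scheduler_type="cosine",
    warmup_steps=0,
    bf16=True,                         # precision is an assumption; A100s support bf16
)
# Max Length (512) is applied at tokenization time,
# e.g. tokenizer(text, truncation=True, max_length=512).
```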

Dataset Details:

Dataset: iamplus/Instruction_Tuning

Files:

  • chat_gpt_v1.csv
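
The CSV can be pulled directly with the `datasets` library. The column names are not documented in this card, so the sketch below prints the loaded split to inspect the schema rather than assuming one.

```python
from datasets import load_dataset

# Load the ChatGPT instruction-tuning CSV from the iamplus/Instruction_Tuning dataset.
ds = load_dataset(
    "iamplus/Instruction_Tuning",
    data_files="chat_gpt_v1.csv",
    split="train",
)
print(ds)     # row count and column names
print(ds[0])  # first example; check the actual fields before writing any preprocessing
```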