Instruction-tuned Bloomz-7B1 model, trained on the Stanford Alpaca instruction-tuning dataset (52k examples) using Colossal AI.
Base Model: bigscience/bloomz-7b1
Training Details:
- Epochs: 5
- Batch Size: 32 per device x 1 gradient accumulation step x 8 GPUs = 256 effective
- Max Length: 512
- Weight Decay: 0
- Learning Rate: 2e-5
- Learning Rate Scheduler Type: Cosine
- Number of Warmup Steps: 0
- Machine: 8x A100 80GB
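
The run itself used Colossal AI on the 8x A100 machine; as a rough reference, the hyperparameters above map onto a Hugging Face `TrainingArguments` configuration like the sketch below. This is only an illustrative mapping, not the original launcher, and the output path is an assumption.

```python
# Minimal sketch: the listed hyperparameters expressed as Hugging Face
# TrainingArguments. The actual run used Colossal AI, so this is an
# illustrative equivalent, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bloomz-7b1-alpaca",    # assumed output path
    num_train_epochs=5,
    per_device_train_batch_size=32,    # x 1 grad-accum step x 8 GPUs = 256
    gradient_accumulation_steps=1,
    learning_rate=2e-5,
    weight_decay=0.0,
    lr_scheduler_type="cosine",
    warmup_steps=0,
)
# The max length of 512 is applied at tokenization time, e.g.:
# tokenizer(batch["text"], truncation=True, max_length=512)
```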
Dataset Details:
Dataset: iamplus/Instruction_Tuning
Files:
- stanford_alpaca_it.csv
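
The training file can be pulled directly from the dataset repository with the `datasets` library; a minimal sketch (the CSV's column layout is not documented here, so inspect it after loading):

```python
# Minimal sketch: load the Alpaca instruction-tuning CSV from the
# iamplus/Instruction_Tuning dataset repository on the Hugging Face Hub.
from datasets import load_dataset

dataset = load_dataset(
    "iamplus/Instruction_Tuning",
    data_files="stanford_alpaca_it.csv",
    split="train",
)
print(dataset)      # inspect columns and the ~52k rows
print(dataset[0])   # look at one example record
```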
Inference:
This model is not currently available via any of the supported third-party inference providers, and it is not deployed on the HF Inference API.
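
Since there is no hosted endpoint, the model can be run locally with `transformers`; a minimal sketch (the checkpoint id below is a placeholder, so substitute this repository's actual model id):

```python
# Minimal sketch: local text generation with transformers.
# "your-org/bloomz-7b1-alpaca" is a placeholder, not the real repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/bloomz-7b1-alpaca"  # placeholder: use this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 7B params: roughly 14 GB in fp16
    device_map="auto",
)

prompt = "Write a short poem about instruction tuning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```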