---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
license: apache-2.0
datasets:
  - vicgalle/alpaca-gpt4
language:
  - en
pipeline_tag: conversational
---

# Model Card for TinyLlama-1.1B-Chat-v1.0 LoRA (Alpaca-GPT4 SFT)

A LoRA adapter for TinyLlama/TinyLlama-1.1B-Chat-v1.0, produced by supervised fine-tuning (SFT) on the vicgalle/alpaca-gpt4 dataset.
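
A minimal usage sketch, assuming this repository hosts a standard PEFT LoRA adapter. The adapter id below is a placeholder for this repo's path, not a value stated in the card; substitute the actual repository name.

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Placeholder adapter id; replace with this repository's actual path.
adapter_id = "bytebarde/TinyLlama-1.1B-Chat-v1.0-alpaca-lora"

# AutoPeftModelForCausalLM reads the adapter config, loads the base model
# (TinyLlama/TinyLlama-1.1B-Chat-v1.0), and attaches the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Give me three tips for staying focused."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```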

## Model Details


## Training Details

### Training Procedure

#### Training Hyperparameters

- Training regime: fp16 mixed precision
- Per-device train batch size: 4
- Epochs: 10
- Final training loss: 0.9044
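
A hedged sketch of a `transformers` training configuration matching the values above, assuming the standard `Trainer`/TRL `SFTTrainer` workflow; the output directory and learning rate are illustrative assumptions, not values from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tinyllama-alpaca-lora",  # assumed; not stated in the card
    fp16=True,                           # fp16 mixed precision (as reported)
    per_device_train_batch_size=4,       # as reported
    num_train_epochs=10,                 # as reported
    learning_rate=2e-4,                  # assumed; not stated in the card
)
```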

### Framework versions

- PEFT 0.7.1