
TinyLlama/TinyLlama-1.1B-Chat-v1.0 AWQ

Model Summary

This repository contains an AWQ-quantized version of TinyLlama/TinyLlama-1.1B-Chat-v1.0, the chat model fine-tuned on top of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T following HF's Zephyr training recipe. Quoting the original model card, the model was "initially fine-tuned on a variant of the UltraChat dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with 🤗 TRL's DPOTrainer on the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions that are ranked by GPT-4."
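
A minimal usage sketch is shown below, assuming the checkpoint is loaded with 🤗 Transformers (which can read AWQ checkpoints when the autoawq package is installed). The prompt, sampling settings, and system message are illustrative assumptions, not part of this card.

```python
# Minimal sketch: load the AWQ-quantized checkpoint and chat with it.
# Assumes `transformers`, `torch`, `accelerate`, and `autoawq` are installed;
# the messages and generation settings below are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/TinyLlama-1.1B-Chat-v1.0-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The chat checkpoint ships a Zephyr-style chat template, so the prompt is
# built with apply_chat_template rather than hand-formatted role tags.
messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Explain AWQ quantization in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same checkpoint can presumably also be loaded with the AutoAWQ library directly; the Transformers path above is just one reasonable option.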

Model details

Format: Safetensors
Model size: 261M params
Tensor types: I32, FP16
