
Quantization made by Richard Erkhov.

Github | Discord | Request more models

TinyLlama-1.1B-1T-OpenOrca - bnb 8bits

Original model description:

license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
language:
- en

Built with Axolotl

Base model:

PY007/TinyLlama-1.1B-intermediate-step-480k-1T

Dataset:

Fine-tuned on the OpenOrca GPT-4 subset for 1 epoch, using the ChatML format.
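Since the model was tuned on ChatML-formatted conversations, prompts at inference time should follow the same turn structure. Below is a minimal sketch of building a ChatML prompt by hand; the `<|im_start|>`/`<|im_end|>` token names follow the ChatML convention, though exact special-token handling may differ in the model's tokenizer config.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system and a user message in ChatML turns, then open the
    assistant turn so the model continues from there."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "What is OpenOrca?",
)
print(prompt)
```

In practice, a tokenizer that ships a chat template (`tokenizer.apply_chat_template`) should be preferred over manual string assembly.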

Model License:

Apache 2.0, following the TinyLlama base model.

Quantization:

8-bit (bitsandbytes), as noted in the title.
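The 8-bit ("bnb 8bits") storage named in the title can be illustrated with a toy absmax scheme: floats are scaled into the int8 range and rounded, and a single scale factor is kept to approximately recover them. This is only a per-tensor sketch for intuition; the real bitsandbytes implementation uses vector-wise scaling and mixed-precision decomposition.

```python
def quantize_int8(weights):
    """Scale floats into the int8 range [-127, 127] and round."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

w = [0.12, -0.5, 0.33, -0.07]          # toy weight values, not from the model
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
# Per-weight error is bounded by half a quantization step (scale / 2),
# which is why 8-bit storage loses little quality while halving memory
# relative to FP16.
```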

Hardware and training details:

Hardware: 1× RTX A5000, ~16 hours to complete 1 epoch. The GPU was rented from autodl.com; the finetuning cost around $3. See https://wandb.ai/jeff200402/TinyLlama-Orca?workspace= for more details.

Model size: 1.1B params (Safetensors)
Tensor types: F32, FP16, I8