---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- sam-mosaic/orca-gpt4-chatml
language:
- en
---
<div align="center">

# TinyLlama-1.1B

Finetuned with ORCA-GPT4 (chatml format)

</div>
This is a fine-tuned version of [TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b) using the [sam-mosaic/orca-gpt4-chatml](https://huggingface.co/datasets/sam-mosaic/orca-gpt4-chatml) dataset.
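
### Usage

The model was finetuned on chatml-formatted conversations, so prompts should follow that format. Below is a minimal inference sketch with 🤗 Transformers; the repo id is a placeholder, so substitute this model's actual Hub path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with this model's actual Hub path
model_id = "your-username/TinyLlama-1.1B-orca-gpt4-chatml"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# chatml prompt format, matching the orca-gpt4-chatml finetuning data
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```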
### Training
- **Method**: QLoRA (a sketch of the setup follows this list)
- **Quantization**: fp16
- **Time**: 20h on an RTX 4090 (rented from runpod.io)
- **Cost**: about $15
- **Based on**: [this Colab notebook](https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g)
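
For reference, a QLoRA setup in the spirit of the linked notebook might look like the sketch below. It is illustrative rather than the exact training script: the 4-bit NF4 quantization, adapter rank, alpha, and target modules are assumptions, with fp16 as the compute dtype per the list above.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"

# QLoRA: load the frozen base model quantized, with fp16 compute
# (4-bit NF4 is an assumption; the card only states fp16)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach trainable low-rank adapters; rank, alpha, and target modules
# are illustrative defaults, not the values used for this checkpoint
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```

From here, training proceeds with a standard `Trainer` (or TRL's `SFTTrainer`) over the chatml-formatted dataset.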