---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- sam-mosaic/orca-gpt4-chatml
language:
- en
---
|
<div align="center">

# TinyLlama-1.1B

Fine-tuned with ORCA-GPT4 (ChatML format)

</div>
|
|
|
This is a fine-tuned version of [TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b) using the [sam-mosaic/orca-gpt4-chatml](https://huggingface.co/datasets/sam-mosaic/orca-gpt4-chatml) dataset. |
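### Usage

The model expects ChatML-formatted prompts. Below is a minimal inference sketch with 🤗 Transformers; `MODEL_ID` is a placeholder for this model's Hub id, and the `<|im_start|>` / `<|im_end|>` markers are the standard ChatML convention assumed from the dataset format:

```python
# Minimal inference sketch; MODEL_ID is a placeholder, not the actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-username/TinyLlama-1.1B-orca-gpt4-chatml"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what a llama is in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```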
|
|
|
### Training

- **Method**: QLoRA (an illustrative setup is sketched below)
- **Quantization**: fp16
- **Time**: 20 h on an RTX 4090 (rented from runpod.io)
- **Cost**: about $15
- **Based on**: [https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g](https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g)
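
The linked notebook is the authoritative recipe. For orientation, here is an illustrative QLoRA setup for this base model using `peft` and `bitsandbytes`; the rank, alpha, dropout, and target modules are assumptions for the sketch, not the notebook's exact values:

```python
# Illustrative QLoRA setup; hyperparameters are placeholders, not the
# exact configuration from the linked Colab notebook.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE_MODEL = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"

# The usual QLoRA recipe: 4-bit NF4 base weights with fp16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections; r/alpha are placeholders.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```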