---
license: apache-2.0
---
Directly quantized 4bit model with `bitsandbytes`.
Unsloth can finetune LLMs with QLoRA 2.2x faster and use 62% less memory!
We have a Google Colab Tesla T4 notebook for TinyLlama with 4096 max sequence length RoPE Scaling here: https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing
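A minimal sketch of loading a bitsandbytes 4-bit model like this one with `transformers`. The repo ID below is a hypothetical placeholder, not this model's actual ID; substitute the real one.

```python
# Sketch: load a pre-quantized bnb 4-bit checkpoint (assumes transformers,
# bitsandbytes, and a CUDA GPU are available).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-4bit-model"  # placeholder — use this repo's actual ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,   # weights are already stored in bnb 4-bit format
    device_map="auto",   # place layers on the available GPU(s)
)
```

Because the checkpoint is already quantized, no separate quantization step is needed at load time.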