# tinyllama-bnb-4bit
---
license: apache-2.0
---

TinyLlama, directly quantized to 4-bit with bitsandbytes.
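A minimal sketch of loading pre-quantized 4-bit weights like these with `transformers` and bitsandbytes; the repo id and generation settings here are assumptions for illustration, and a CUDA GPU is required:

```python
# Hedged sketch: load a repo whose weights are already stored in
# bitsandbytes 4-bit format. The model id below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/tinyllama-bnb-4bit"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# from_pretrained picks up the stored quantization config, so the
# weights stay in 4-bit; device_map="auto" places them on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)

inputs = tokenizer("TinyLlama is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the checkpoint is quantized ahead of time, there is no on-the-fly quantization step at load, which saves startup time and peak memory compared with quantizing a full-precision checkpoint yourself.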

Unsloth can finetune LLMs with QLoRA 2.2x faster while using 62% less memory!

We have a Google Colab Tesla T4 notebook for TinyLlama with a 4096 max sequence length (via RoPE scaling) here: https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing
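The notebook's setup can be sketched roughly as below with Unsloth's `FastLanguageModel` API; the exact LoRA hyperparameters are assumptions, not the notebook's verbatim settings:

```python
# Hedged sketch of QLoRA finetuning setup with Unsloth, assuming the
# public FastLanguageModel API; hyperparameters are illustrative only.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",  # assumed repo id
    max_seq_length=4096,  # RoPE scaling extends TinyLlama's native context
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small matrices are trained,
# which is where the memory savings of QLoRA come from.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank (assumed value)
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

The resulting `model` can then be passed to a standard trainer (e.g. TRL's `SFTTrainer`) as the notebook does.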