Tags: MLX · English · llama

TinyLlama-1.1B

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to pretrain a 1.1B-parameter Llama model on 3 trillion tokens. This repository contains the TinyLlama-1.1B-Chat-v0.6 weights in npz format, suitable for use with Apple's MLX framework. For more information about the model, please review its original model card.

How to use

# Install dependencies
pip install mlx
pip install huggingface_hub

# Clone the MLX examples repository
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples

# Download the model weights
huggingface-cli download --local-dir-use-symlinks False --local-dir tinyllama-1.1B-Chat-v0.6 mlx-community/tinyllama-1.1B-Chat-v0.6

# Run example
python llms/llama/llama.py --model-path tinyllama-1.1B-Chat-v0.6 --prompt "My name is"
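Because the weights ship as a standard npz archive, you can inspect the parameter names and shapes directly before running the example. The sketch below uses NumPy and creates a small stand-in file so it runs anywhere; with the real download, you would point `np.load` at `tinyllama-1.1B-Chat-v0.6/weights.npz` instead (the exact filename inside the repository is an assumption here).

```python
import numpy as np

# Stand-in for the downloaded weights.npz; the layer name below is
# illustrative, not taken from the actual TinyLlama checkpoint.
np.savez(
    "demo_weights.npz",
    **{"layers.0.attention.wq.weight": np.zeros((8, 8), dtype=np.float16)},
)

# Load the archive lazily and list each parameter's name, shape, and dtype.
weights = np.load("demo_weights.npz")
for name in weights.files:
    print(name, weights[name].shape, weights[name].dtype)
```

MLX users can do the same with `mx.load("weights.npz")`, which returns a dict of MLX arrays ready to pass into a model.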
