# Llama-160M-Chat-v1-4bit-mlx
This model was converted to MLX format from [Felladrin/Llama-160M-Chat-v1](https://huggingface.co/Felladrin/Llama-160M-Chat-v1).
Refer to the original model card for more details on the model.
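A conversion like this can be reproduced with the `convert` helper from the `mlx-lm` package. The sketch below is hypothetical: the exact command used to produce this repository is not recorded in the card, and it assumes `pip install mlx-lm` on an Apple-silicon Mac.

```python
# Hypothetical reproduction of the conversion; the exact invocation used for
# this repository is not recorded in the card. Assumes `pip install mlx-lm`.
from mlx_lm import convert

convert(
    "Felladrin/Llama-160M-Chat-v1",           # source weights on the Hugging Face Hub
    mlx_path="Llama-160M-Chat-v1-4bit-mlx",   # local output directory (assumed name)
    quantize=True,                            # quantization defaults to 4-bit
)
```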
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/Llama-160M-Chat-v1-4bit-mlx --prompt "<|im_start|>system\nYou are a helpful assistant who answers user's questions with details and curiosity.<|im_end|>\n<|im_start|>user\nWhat are some potential applications for quantum computing?<|im_end|>\n<|im_start|>assistant"
```
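Alternatively, the same generation step can be run from Python with the `mlx-lm` package. This is a minimal sketch, not the card's documented workflow; it assumes `pip install mlx-lm` and uses the same ChatML-style prompt the model was fine-tuned on.

```python
# A minimal sketch of generation via the mlx-lm Python API, as an
# alternative to the generate.py script above. Assumes `pip install mlx-lm`.
from mlx_lm import load, generate

# Download the 4-bit weights from the Hugging Face Hub and load them.
model, tokenizer = load("mlx-community/Llama-160M-Chat-v1-4bit-mlx")

# The model expects ChatML-formatted prompts, as in the command above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant who answers user's questions with details "
    "and curiosity.<|im_end|>\n"
    "<|im_start|>user\n"
    "What are some potential applications for quantum computing?<|im_end|>\n"
    "<|im_start|>assistant"
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```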
Base model: [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m)