---
language:
- code
license: llama2
tags:
- llama-2
- mlx
pipeline_tag: text-generation
---

![CodeLlama 70B base](https://cdn.discordapp.com/attachments/1064373193982361601/1202164612645265418/codellama70bbase.png?ex=65cc760a&is=65ba010a&hm=d8db3259380c5faa567b20614af3c1c203a459fa7fbf3e01221bb80d9a95e246&)

# mlx-community/CodeLlama-70b-hf-4bit-MLX

This model was converted to MLX format from [`codellama/CodeLlama-70b-hf`](https://huggingface.co/codellama/CodeLlama-70b-hf). Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-70b-hf) for more details on the model. It can be used as a base for additional fine-tuning experiments.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-70b-hf-4bit-MLX")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
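Because this is a base code model rather than an instruct-tuned one, it works best with plain completion prompts. Below is a minimal sketch of a code-completion call; the `fibonacci` prompt and the `max_tokens` value are illustrative choices, not part of the original card.

```python
from mlx_lm import load, generate

# Load the 4-bit quantized weights and matching tokenizer from the Hub.
model, tokenizer = load("mlx-community/CodeLlama-70b-hf-4bit-MLX")

# A base model continues text, so prompt it with the start of the code
# you want completed. The prompt and max_tokens here are illustrative.
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
response = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(response)
```

mlx-lm also ships a command-line entry point (`python -m mlx_lm.generate --model mlx-community/CodeLlama-70b-hf-4bit-MLX --prompt "..."`) if you prefer not to write Python; check `--help` for the exact flags in your installed version.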