---
language:
- code
license: llama2
tags:
- llama-2
- mlx
pipeline_tag: text-generation
---

![Alt text](https://cdn.discordapp.com/attachments/1064373193982361601/1201677160008384594/DALLE_2024-01-30_00.53.15_-_Imagine_a_whimsical_hyper-detailed_illustration_suitable_for_a_childrens_book_featuring_a_cartoon_alpaca_sitting_comfortably_and_using_an_Apple_lap.png?ex=65cab011&is=65b83b11&hm=373057e35079d276954594d43ea8e9e8223bd4956707a96c130a62850c8570b1&)

# mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX

This model was converted to MLX format from [`codellama/CodeLlama-70b-Instruct-hf`](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf).
Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX")
response = generate(
    model,
    tokenizer,
    prompt="Source: user Fibonacci series in Python Source: assistant Destination: user",
    verbose=True,
)
```
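The CodeLlama-70b-Instruct models use an unusual `Source:`/`Destination:` turn format, which the hand-written prompt above mimics. If this converted repository ships the upstream chat template, you can let the tokenizer assemble the prompt instead; a minimal sketch, assuming the tokenizer returned by `load` exposes the usual `apply_chat_template` method:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX")

# Let the chat template insert the Source:/Destination: markers
# (assumes the repo includes the upstream CodeLlama-70b template).
messages = [{"role": "user", "content": "Fibonacci series in Python"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

Recent versions of mlx-lm also include a command-line generator, so the same call can be run without writing any Python, e.g. `python -m mlx_lm.generate --model mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX --prompt "..."`.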