Update code snippet
#3 · by pcuenq
README.md
CHANGED
@@ -18,7 +18,6 @@ Weights have been converted to `float16` from the original `bfloat16` type, beca
 How to use with [MLX](https://github.com/ml-explore/mlx).
 
 ```bash
-
 # Install mlx, mlx-examples, huggingface-cli
 pip install mlx
 pip install huggingface_hub hf_transfer
@@ -29,7 +28,7 @@ export HF_HUB_ENABLE_HF_TRANSFER=1
 huggingface-cli download --local-dir CodeLlama-7b-Python-mlx mlx-llama/CodeLlama-7b-Python-mlx
 
 # Run example
-python mlx-examples/llama/llama.py CodeLlama-7b-Python-mlx/ CodeLlama-7b-Python-mlx/tokenizer.model
+python mlx-examples/llama/llama.py --prompt "def fibonacci(n):" CodeLlama-7b-Python-mlx/ CodeLlama-7b-Python-mlx/tokenizer.model --max-tokens 200
 ```
 
 Please, refer to the [original model card](https://github.com/facebookresearch/codellama/blob/main/MODEL_CARD.md) for details on CodeLlama.
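If you drive the updated command from a script rather than an interactive shell, the prompt argument (which contains spaces and parentheses) needs quoting. A minimal stdlib sketch of building the new invocation — the paths are the ones from the snippet above; the actual `subprocess` call is left commented out because it requires the downloaded weights:

```python
import shlex

# The updated run command from this PR, as an argument list
# (one list element per argv entry, so no shell quoting is needed here).
cmd = [
    "python", "mlx-examples/llama/llama.py",
    "--prompt", "def fibonacci(n):",
    "CodeLlama-7b-Python-mlx/",
    "CodeLlama-7b-Python-mlx/tokenizer.model",
    "--max-tokens", "200",
]

# shlex.join adds quotes around the prompt so a shell would pass it
# as a single argument; useful for logging or copy-pasting the command.
command_line = shlex.join(cmd)
print(command_line)

# To actually run it (needs mlx-examples and the downloaded model):
# import subprocess
# subprocess.run(cmd, check=True)
```

Passing the list form directly to `subprocess.run` sidesteps shell quoting entirely; `shlex.join` is only needed when the command is handed to a shell as a single string.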