ivanfioravanti committed on
Commit
0ec79da
1 Parent(s): d3d9920

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -8,6 +8,8 @@ tags:
 pipeline_tag: text-generation
 ---
 
+![Alt text](https://cdn.discordapp.com/attachments/1064373193982361601/1201679652717076560/codellama70bpythonmlx.png?ex=65cab263&is=65b83d63&hm=9748823267a97da4cda34e932fb93246dfe45de181c941dbad00f345d13973a0&)
+
 # mlx-community/CodeLlama-70b-Python-hf-4bit-MLX
 This model was converted to MLX format from [`codellama/CodeLlama-70b-Python-hf`]().
 Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-70b-Python-hf) for more details on the model.
@@ -21,5 +23,5 @@ pip install mlx-lm
 from mlx_lm import load, generate
 
 model, tokenizer = load("mlx-community/CodeLlama-70b-Python-hf-4bit-MLX")
-response = generate(model, tokenizer, prompt="hello", verbose=True)
+response = generate(model, tokenizer, prompt="<step>Source: user Fibonacci series in Python<step> Source: assistant Destination: user", verbose=True)
 ```
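
The updated prompt string follows the `<step>`-delimited instruct-style format that Code Llama 70B expects. A minimal sketch of a helper for assembling such prompts, assuming this format; `build_prompt` is a hypothetical function, not part of `mlx_lm`:

```python
def build_prompt(user_message: str) -> str:
    """Assemble a <step>-delimited prompt string for Code Llama 70B.

    Hypothetical helper: it mirrors the format used in the updated
    README example and is not provided by mlx_lm itself.
    """
    return (
        f"<step>Source: user {user_message}"
        "<step> Source: assistant Destination: user"
    )

# Reproduces the prompt from the README example.
prompt = build_prompt("Fibonacci series in Python")
```

The resulting string can then be passed as the `prompt` argument to `generate`, as shown in the diff above.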