Commit 913dd97 · 1 parent: b7a884e · committed by eek

ADD: Python example in Readme

Files changed (1):
1. README.md +20 -3
README.md CHANGED
@@ -93,11 +93,28 @@ if the mlx-lm package was updated it can also be installed from pip:
 pip install mlx-lm
 ```
 
+To use it from Python you can do the following:
+
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("mlx-community/dbrx-instruct-4bit")
-response = generate(model, tokenizer, prompt="hello", verbose=True)
-```
+model, tokenizer = load(
+    "/Users/eek/work/dbrx-instruct-4bit/",
+    tokenizer_config={"trust_remote_code": True}
+)
+
+chat = [
+    {"role": "user", "content": "What's the difference between PCA vs UMAP vs t-SNE?"},
+    # We need to add the assistant role as well, otherwise mlx_lm will error on generation.
+    {"role": "assistant", "content": "The "},
+]
+
+prompt = tokenizer.apply_chat_template(chat, tokenize=False)
+
+# We need to remove the last <|im_end|> token so that the AI continues generation
+prompt = prompt[::-1].replace("<|im_end|>"[::-1], "", 1)[::-1]
+
+response = generate(model, tokenizer, prompt=prompt, verbose=True, temp=0.6, max_tokens=1500)
+```
 
 Converted and uploaded by [eek](https://huggingface.co/eek)
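
A note on the reversed-`replace` one-liner in the diff: `str.replace(old, new, 1)` only substitutes the first occurrence, so reversing the prompt (and the search token to match) turns it into a removal of the *last* occurrence — the final `<|im_end|>` that `apply_chat_template` appends after the primed assistant message. Here is a minimal, equivalent sketch using `str.rfind` instead; the helper name is hypothetical and not part of mlx-lm:

```python
def strip_last_token(prompt: str, token: str = "<|im_end|>") -> str:
    """Remove the final occurrence of `token` so the model keeps generating.

    Hypothetical helper, equivalent to the reversed-replace one-liner above.
    """
    i = prompt.rfind(token)  # index of the last occurrence, -1 if absent
    if i == -1:
        return prompt        # nothing to strip
    return prompt[:i] + prompt[i + len(token):]

# The template closes the primed assistant turn with <|im_end|>; stripping it
# leaves the prompt open-ended so generation continues from "The ".
assert strip_last_token("<|im_start|>assistant\nThe <|im_end|>") == "<|im_start|>assistant\nThe "
```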