mc0ps committed
Commit
c9e453e
1 Parent(s): a8629ff

Update README.md

Files changed (1):
  1. README.md +10 -5
README.md CHANGED
@@ -14,13 +14,18 @@ datasets:
 - LDJnr/Capybara
 ---
 
-# dolphin-2.6-mistral-7b-dpo-laser-mlx
+# mlx-community/dolphin-2.6-mistral-7b-dpo-laser-mlx
 This model was converted to MLX format from [`cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser`]().
 Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser) for more details on the model.
 ## Use with mlx
+
 ```bash
-pip install mlx
-git clone https://github.com/ml-explore/mlx-examples.git
-cd mlx-examples/llms/hf_llm
-python generate.py --model mlx-community/dolphin-2.6-mistral-7b-dpo-laser-mlx --prompt "My name is"
+pip install mlx-lm
+```
+
+```python
+from mlx_lm import load, generate
+
+model, tokenizer = load("mlx-community/dolphin-2.6-mistral-7b-dpo-laser-mlx")
+response = generate(model, tokenizer, prompt="hello", verbose=True)
 ```
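
For reference, MLX conversions like the one this README describes are typically produced with mlx-lm's `mlx_lm.convert` tool. The commands below are a minimal sketch, not the exact invocation used for this model: the output path is illustrative and the optional `-q` quantization flag is an assumption (flag names and defaults can differ across mlx-lm releases).

```bash
# Illustrative sketch: reconvert the original Hugging Face checkpoint to MLX format.
# -q (quantize) is optional; flag availability may vary across mlx-lm versions.
pip install mlx-lm
python -m mlx_lm.convert \
    --hf-path cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser \
    --mlx-path dolphin-2.6-mistral-7b-dpo-laser-mlx \
    -q
```

The resulting directory can then be loaded with `load()` exactly as in the Python example above.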