Tags: MLX, Safetensors, Russian, English, llama
WaveCut committed
Commit 73a8c90
Parent: 58bbef3

Update README.md

Files changed (1):
  1. README.md (+2 -2)
README.md CHANGED
@@ -9,7 +9,7 @@ datasets:
 - dichspace/darulm
 ---
 
-# mlx-community/Vikhr-7B-instruct_0.2_q-4
+# mlx-community/Vikhr-7B-instruct_0.2-4bit
 This model was converted to MLX format from [`Vikhrmodels/Vikhr-7B-instruct_0.2`]() using mlx-lm version **0.6.0**.
 Refer to the [original model card](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.2) for more details on the model.
 ## Use with mlx
@@ -21,6 +21,6 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("mlx-community/Vikhr-7B-instruct_0.2_q-4")
+model, tokenizer = load("mlx-community/Vikhr-7B-instruct_0.2-4bit")
 response = generate(model, tokenizer, prompt="hello", verbose=True)
 ```
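
As a quick check of the renamed repository id, here is a minimal sketch of the usage from the updated README. The chat-template wrapping is an added assumption (the Vikhr instruct tokenizer may or may not ship one), and it assumes the 4-bit repo from this commit is published on the Hub:

```python
# Minimal sketch, assuming mlx-lm is installed (pip install mlx-lm) and the
# renamed repo mlx-community/Vikhr-7B-instruct_0.2-4bit from this commit exists.
from mlx_lm import load, generate

# Download and load the 4-bit quantized weights and tokenizer from the Hub.
model, tokenizer = load("mlx-community/Vikhr-7B-instruct_0.2-4bit")

# Assumption: if the tokenizer ships a chat template, wrap the prompt with it
# so the instruct-tuned model sees the format it was trained on.
prompt = "hello"
if getattr(tokenizer, "chat_template", None):
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )

# generate() returns the completion as a string and prints it when verbose=True.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
```

Without the templated prompt, `generate` still runs but treats the input as raw completion text rather than an instruction turn.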