Upload folder using huggingface_hub

#5
Files changed (2)
  1. README.md +12 -3
  2. config.json +4 -0
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-base_model: google/gemma-2-2b-it
+base_model: google/gemma-2-2b-jpn-it
 language:
 - ja
 library_name: transformers
@@ -17,7 +17,7 @@ extra_gated_button_content: Acknowledge license
 
 # mlx-community/gemma-2-2b-jpn-it-8bit
 
-The Model [mlx-community/gemma-2-2b-jpn-it-8bit](https://huggingface.co/mlx-community/gemma-2-2b-jpn-it-8bit) was converted to MLX format from [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using mlx-lm version **0.17.1**.
+The Model [mlx-community/gemma-2-2b-jpn-it-8bit](https://huggingface.co/mlx-community/gemma-2-2b-jpn-it-8bit) was converted to MLX format from [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using mlx-lm version **0.19.0**.
 
 ## Use with mlx
 
@@ -29,5 +29,14 @@ pip install mlx-lm
 from mlx_lm import load, generate
 
 model, tokenizer = load("mlx-community/gemma-2-2b-jpn-it-8bit")
-response = generate(model, tokenizer, prompt="hello", verbose=True)
+
+prompt = "hello"
+
+if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
+    messages = [{"role": "user", "content": prompt}]
+    prompt = tokenizer.apply_chat_template(
+        messages, tokenize=False, add_generation_prompt=True
+    )
+
+response = generate(model, tokenizer, prompt=prompt, verbose=True)
 ```
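For reference, here is the updated snippet assembled into a runnable script. This is a minimal sketch: the Japanese prompt text and the `max_tokens` value are illustrative additions, not part of the diff.

```python
# Minimal end-to-end sketch of the updated README snippet.
# Requires: pip install mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-2-2b-jpn-it-8bit")

prompt = "日本の首都はどこですか？"  # "What is the capital of Japan?" (illustrative)

# Wrap the raw prompt in the model's chat template when the tokenizer has one,
# as the updated snippet does; instruction-tuned models expect this format.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# max_tokens is an illustrative choice, not specified in the diff.
response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=256)
```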
config.json CHANGED
@@ -25,6 +25,10 @@
     "group_size": 64,
     "bits": 8
   },
+  "quantization_config": {
+    "group_size": 64,
+    "bits": 8
+  },
   "query_pre_attn_scalar": 224,
   "rms_norm_eps": 1e-06,
   "rope_theta": 10000.0,