---
language:
- en
license: other
tags:
- causal-lm
- mlx
datasets:
- HuggingFaceH4/ultrachat_200k
- allenai/ultrafeedback_binarized_cleaned
- meta-math/MetaMathQA
- WizardLM/WizardLM_evol_instruct_V2_196k
- openchat/openchat_sharegpt4_dataset
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- hkust-nlp/deita-10k-v0
- teknium/OpenHermes-2.5
extra_gated_fields:
  Name: text
  Email: text
  Country: text
  Organization or Affiliation: text
  I ALLOW Stability AI to email me about new model releases: checkbox
---

# mlx-community/stablelm-2-12b-chat-4bit

This model was converted to MLX format from [`stabilityai/stablelm-2-12b-chat`](https://huggingface.co/stabilityai/stablelm-2-12b-chat) using mlx-lm version **0.8.0**.

Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).

Refer to the [original model card](https://huggingface.co/stabilityai/stablelm-2-12b-chat) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/stablelm-2-12b-chat-4bit")

# Generate a completion; verbose=True prints the output as it is produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
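
Since this is a chat-tuned model, prompts generally work best when wrapped in the model's chat template rather than passed as raw text. A minimal sketch, assuming the tokenizer returned by `load` exposes the standard Hugging Face `apply_chat_template` method (the message content below is just an illustrative example):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/stablelm-2-12b-chat-4bit")

# Format the conversation with the model's chat template before generating
messages = [{"role": "user", "content": "Write a haiku about the ocean."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

For quick tests, mlx-lm also provides a command-line entry point, e.g. `python -m mlx_lm.generate --model mlx-community/stablelm-2-12b-chat-4bit --prompt "hello"`.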