Text Generation · MLX · English · llama · 4-bit precision · conversational
Felladrin committed on
Commit 31964ee
1 Parent(s): 8ff2de7

Update README.md

Files changed (1)
  1. README.md +5 -105
README.md CHANGED
@@ -5,6 +5,7 @@ license: apache-2.0
 tags:
 - text-generation
 - mlx
+- 4-bit
 datasets:
 - ehartford/wizard_vicuna_70k_unfiltered
 - totally-not-an-llm/EverythingLM-data-V3
@@ -12,118 +13,17 @@ datasets:
 - databricks/databricks-dolly-15k
 - THUDM/webglm-qa
 base_model: JackFram/llama-160m
-widget:
-- text: '<|im_start|>system
-
-  You are a helpful assistant, who answers with empathy.<|im_end|>
-
-  <|im_start|>user
-
-  Got a question for you!<|im_end|>
-
-  <|im_start|>assistant
-
-  Sure! What''s it?<|im_end|>
-
-  <|im_start|>user
-
-  Why do you love cats so much!? 🐈<|im_end|>
-
-  <|im_start|>assistant'
-- text: '<|im_start|>system
-
-  You are a helpful assistant who answers user''s questions with empathy.<|im_end|>
-
-  <|im_start|>user
-
-  Who is Mona Lisa?<|im_end|>
-
-  <|im_start|>assistant'
-- text: '<|im_start|>system
-
-  You are a helpful assistant who provides concise responses.<|im_end|>
-
-  <|im_start|>user
-
-  Heya!<|im_end|>
-
-  <|im_start|>assistant
-
-  Hi! How may I help you today?<|im_end|>
-
-  <|im_start|>user
-
-  I need to build a simple website. Where should I start learning about web development?<|im_end|>
-
-  <|im_start|>assistant'
-- text: '<|im_start|>user
-
-  Invited some friends to come home today. Give me some ideas for games to play
-  with them!<|im_end|>
-
-  <|im_start|>assistant'
-- text: '<|im_start|>system
-
-  You are a helpful assistant who answers user''s questions with details and curiosity.<|im_end|>
-
-  <|im_start|>user
-
-  What are some potential applications for quantum computing?<|im_end|>
-
-  <|im_start|>assistant'
-- text: '<|im_start|>system
-
-  You are a helpful assistant who gives creative responses.<|im_end|>
-
-  <|im_start|>user
-
-  Write the specs of a game about mages in a fantasy world.<|im_end|>
-
-  <|im_start|>assistant'
-- text: '<|im_start|>system
-
-  You are a helpful assistant who answers user''s questions with details.<|im_end|>
-
-  <|im_start|>user
-
-  Tell me about the pros and cons of social media.<|im_end|>
-
-  <|im_start|>assistant'
-- text: '<|im_start|>system
-
-  You are a helpful assistant who answers user''s questions with confidence.<|im_end|>
-
-  <|im_start|>user
-
-  What is a dog?<|im_end|>
-
-  <|im_start|>assistant
-
-  A dog is a four-legged, domesticated animal that is a member of the class Mammalia,
-  which includes all mammals. Dogs are known for their loyalty, playfulness, and
-  ability to be trained for various tasks. They are also used for hunting, herding,
-  and as service animals.<|im_end|>
-
-  <|im_start|>user
-
-  What is the color of an apple?<|im_end|>
-
-  <|im_start|>assistant'
-inference:
-  parameters:
-    max_new_tokens: 250
-    penalty_alpha: 0.5
-    top_k: 4
-    repetition_penalty: 1.01
 ---
 
 # Llama-160M-Chat-v1-4bit-mlx
-This model was converted to MLX format from [`Felladrin/Llama-160M-Chat-v1`]().
+This model was converted to MLX format from [`Felladrin/Llama-160M-Chat-v1`](https://huggingface.co/Felladrin/Llama-160M-Chat-v1).
 Refer to the [original model card](https://huggingface.co/Felladrin/Llama-160M-Chat-v1) for more details on the model.
+
 ## Use with mlx
+
 ```bash
 pip install mlx
 git clone https://github.com/ml-explore/mlx-examples.git
 cd mlx-examples/llms/hf_llm
-python generate.py --model mlx-community/Llama-160M-Chat-v1-4bit-mlx --prompt "My name is"
+python generate.py --model mlx-community/Llama-160M-Chat-v1-4bit-mlx --prompt "<|im_start|>system\nYou are a helpful assistant who answers user's questions with details and curiosity.<|im_end|>\n<|im_start|>user\nWhat are some potential applications for quantum computing?<|im_end|>\n<|im_start|>assistant"
 ```
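The commit does not show how the 4-bit conversion itself was produced. For context, here is a hedged sketch of how such a conversion is typically done with the same mlx-examples/llms/hf_llm scripts the README points to; the `convert.py` script and its `--hf-path`/`-q` flags are assumptions based on that repository's documentation at the time, not something confirmed by this commit.

```bash
# Hypothetical reconstruction of the conversion step (not part of this commit):
# convert the original Hugging Face checkpoint to MLX format and quantize it.
cd mlx-examples/llms/hf_llm
# -q enables quantization (4-bit by default in that script, per its docs)
python convert.py --hf-path Felladrin/Llama-160M-Chat-v1 -q
```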
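One note on the updated example command: because the prompt is wrapped in double quotes, bash passes the `\n` sequences to `generate.py` as literal backslash-n characters rather than newlines. If real newlines between the ChatML tags are intended, as in the original model's chat format, ANSI-C quoting is one way to get them. This is a sketch under that assumption; whether `generate.py` unescapes `\n` on its own has not been verified here.

```bash
# Same model and prompt as in the README example, but with ANSI-C quoting ($'...')
# so bash expands each \n into a real newline before the prompt reaches generate.py.
# The apostrophe in "user's" is escaped as \' inside $'...'.
python generate.py \
  --model mlx-community/Llama-160M-Chat-v1-4bit-mlx \
  --prompt $'<|im_start|>system\nYou are a helpful assistant who answers user\'s questions with details and curiosity.<|im_end|>\n<|im_start|>user\nWhat are some potential applications for quantum computing?<|im_end|>\n<|im_start|>assistant'
```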