palazski committed
Commit d222040
1 Parent(s): 6c6f259

add usage code to README

Files changed (1): README.md (+85, -1)
README.md CHANGED
@@ -101,4 +101,88 @@ MARS has been trained for 3 days on 4xA100.
 
 - **Base Model**: Meta Llama 3 8B Instruct
 - **Training Dataset**: In-house & Translated Open Source Turkish Datasets
-- **Training Method**: LoRA Fine Tuning
+- **Training Method**: LoRA Fine Tuning
+
+
+## How to use
+
+You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
+
+### Transformers pipeline
+
+```python
+import transformers
+import torch
+
+model_id = "curiositytech/MARS"
+
+pipeline = transformers.pipeline(
+    "text-generation",
+    model=model_id,
+    model_kwargs={"torch_dtype": torch.bfloat16},
+    device_map="auto",
+)
+
+# Turkish example chat: "You are a pirate chatbot that talks like a pirate!" / "Who are you?"
+messages = [
+    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
+    {"role": "user", "content": "Sen kimsin?"},
+]
+
+# Stop on either the default EOS token or Llama 3's end-of-turn token.
+terminators = [
+    pipeline.tokenizer.eos_token_id,
+    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
+]
+
+outputs = pipeline(
+    messages,
+    max_new_tokens=256,
+    eos_token_id=terminators,
+    do_sample=True,
+    temperature=0.6,
+    top_p=0.9,
+)
+print(outputs[0]["generated_text"][-1])
+```
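+
+The chat pipeline returns the whole conversation as a list of message dicts; the final entry is the model's reply. A minimal sketch for pulling out just the reply text (the indexing assumes the message format above):
+
+```python
+# Extract only the assistant's reply string from the pipeline output.
+assistant_reply = outputs[0]["generated_text"][-1]["content"]
+print(assistant_reply)
+```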
+
+### Transformers AutoModelForCausalLM
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
+
+model_id = "curiositytech/MARS"
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(
+    model_id,
+    torch_dtype=torch.bfloat16,
+    device_map="auto",
+)
+
+messages = [
+    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
+    {"role": "user", "content": "Sen kimsin?"},
+]
+
+# Build the Llama 3 chat prompt and move it to the model's device.
+input_ids = tokenizer.apply_chat_template(
+    messages,
+    add_generation_prompt=True,
+    return_tensors="pt",
+).to(model.device)
+
+terminators = [
+    tokenizer.eos_token_id,
+    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
+]
+
+outputs = model.generate(
+    input_ids,
+    max_new_tokens=256,
+    eos_token_id=terminators,
+    do_sample=True,
+    temperature=0.6,
+    top_p=0.9,
+)
+# Strip the prompt tokens and decode only the newly generated reply.
+response = outputs[0][input_ids.shape[-1]:]
+print(tokenizer.decode(response, skip_special_tokens=True))
+```
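+
+### LoRA fine-tuning (illustrative sketch)
+
+The model card lists LoRA fine-tuning as the training method. The training code itself is not part of this repository, so the snippet below is only a minimal sketch of a LoRA setup for this base model using the `peft` library; the rank, alpha, dropout, and target modules are illustrative assumptions, not the values used to train MARS.
+
+```python
+import torch
+from transformers import AutoModelForCausalLM
+from peft import LoraConfig, get_peft_model
+
+# Load the base model (Meta Llama 3 8B Instruct, per the model card).
+base_model = AutoModelForCausalLM.from_pretrained(
+    "meta-llama/Meta-Llama-3-8B-Instruct",
+    torch_dtype=torch.bfloat16,
+    device_map="auto",
+)
+
+# Assumed LoRA hyperparameters -- not the settings actually used for MARS.
+lora_config = LoraConfig(
+    r=16,
+    lora_alpha=32,
+    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+    lora_dropout=0.05,
+    bias="none",
+    task_type="CAUSAL_LM",
+)
+
+# Wrap the base model so only the small adapter matrices are trainable.
+model = get_peft_model(base_model, lora_config)
+model.print_trainable_parameters()
+```
+
+After training, only the adapter weights need to be saved; they can later be merged back into the base model for deployment.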