oxygen65 committed
Commit c81104c
1 Parent(s): a8fd398

Update README.md

Files changed (1)
  1. README.md +12 -5

README.md CHANGED
@@ -160,8 +160,6 @@ results = []
 loop = 0
 for data in tqdm(tasks):
     task_id = data["task_id"]
-    if task_id != 66 and task_id != 72:
-        continue
     input = data["input"]
     # prompt for in-context learning
     icl_prompt = create_icl_prompt(input, sample_tasks, task_id)
@@ -170,7 +168,6 @@ for data in tqdm(tasks):
 {input}
 ### 回答
 """
-    # first attempt
     tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
     with torch.no_grad():
         outputs = model.generate(
@@ -201,11 +198,21 @@ for data in tqdm(tasks):
 
     print(f"task_id: {data['task_id']}, prompt: {prompt}, output: {output}")
 
-    #break
+```
+
+### 5. Dump results
+```python
+import re
+model_name = re.sub(".*/", "", model_name)
+with open(f"./{model_name}-outputs.jsonl", 'w', encoding='utf-8') as f:
+    for result in results:
+        json.dump(result, f, ensure_ascii=False)  # ensure_ascii=False for handling non-ASCII characters
+        f.write('\n')
 ```
 
 # Uploaded model
 
-- **Developed by:** oxygen65
+- **Developed by:** oxygen65
+
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
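
The first hunk of this commit removes a leftover debug filter that skipped every task except ids 66 and 72. The effect of that pattern can be seen in isolation with a minimal plain-Python sketch (the task ids 1 and 3 below are made up for illustration):

```python
# Debug-style filter of the kind removed by the commit:
# skip every task except a chosen few while iterating.
tasks = [{"task_id": i} for i in range(5)]
keep = {1, 3}  # hypothetical ids standing in for 66 and 72

processed = []
for data in tasks:
    task_id = data["task_id"]
    if task_id not in keep:  # equivalent to: if task_id != 1 and task_id != 3: continue
        continue
    processed.append(task_id)

print(processed)  # [1, 3]
```

Removing the filter, as the commit does, makes the loop process the full task list again instead of the two tasks singled out for debugging.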
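
The "Dump results" step added by this commit writes one JSON object per line with `ensure_ascii=False` so non-ASCII text is stored readably. A self-contained round-trip sketch of that pattern (the filename and the record keys here are illustrative, not taken verbatim from the repo):

```python
import json

# Round-trip sketch of the JSONL dump pattern from step 5:
# write one JSON object per line, then read the file back.
results = [
    {"task_id": 0, "input": "question", "output": "answer"},
    {"task_id": 1, "input": "質問", "output": "回答"},  # non-ASCII survives with ensure_ascii=False
]

path = "demo-outputs.jsonl"  # hypothetical name; the README derives it from model_name
with open(path, "w", encoding="utf-8") as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write("\n")

with open(path, encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f if line.strip()]

assert loaded == results  # the dump/load cycle preserves every record
```

With `ensure_ascii=True` (the default), the Japanese text would instead be written as `\uXXXX` escapes; the file would still parse identically, but would not be human-readable.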