savasy committed
Commit 85307d8
1 Parent(s): b4fb709

Update README.md

Files changed (1): README.md (+5 -2)
README.md CHANGED
@@ -36,10 +36,13 @@ with torch.no_grad():
 inputs = tokenizer(test_input, return_tensors="pt", padding=True).to("cuda")
 generated_ids = inference_model.generate(**inputs)
 outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-outputs```
+outputs
 -> [Ahmed]
+```
+
 
 The usage for batch mode is as follows:
+
 ```
 from peft import PeftModel, PeftConfig
 peft_model_path="savasy/mt0-large-Turkish-qa"
@@ -66,5 +69,5 @@ with torch.no_grad():
 outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
 preds+=outputs
 ```
-# compare preds with your expected ground-truth results
+At the end, you can compare *preds* with your expected ground-truth results
 
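The batch-mode section touched by this diff accumulates per-batch decodings with `preds+=outputs` and then compares `preds` against ground truth. A minimal runnable sketch of that accumulation pattern is below; the `batched` helper, the stand-in `fake_generate_and_decode` function, and the batch size are illustrative assumptions, since the real loop calls `inference_model.generate(...)` and `tokenizer.batch_decode(...)`, which require downloading the model and a CUDA device:

```python
# Sketch of the batch-mode accumulation pattern from the README diff above.
# `fake_generate_and_decode` stands in for generate() + batch_decode() so the
# pattern runs without the model; everything except `preds += outputs` is
# illustrative, not from the README.
def batched(items, batch_size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def fake_generate_and_decode(batch):
    """Stand-in for generate() + batch_decode(); one string per input."""
    return [f"answer-for:{q}" for q in batch]

questions = ["q1", "q2", "q3", "q4", "q5"]
preds = []
for batch in batched(questions, batch_size=2):
    outputs = fake_generate_and_decode(batch)
    preds += outputs  # same accumulation as `preds+=outputs` in the diff

# preds now holds one prediction per question, in input order,
# ready to compare against the expected ground-truth answers.
```

Keeping `preds` as a flat list in input order is what makes the final comparison against ground truth a simple element-wise check.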