m-elio committed
Commit: 796f068
Parent(s): 7d0b157

update model card usage example

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -66,7 +66,7 @@ prompt = "Di seguito è riportata un'istruzione che descrive un'attività, accom
  input_ids = tokenizer(prompt, return_tensors="pt").input_ids
  outputs = model.generate(input_ids=input_ids)
 
- print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):])
+ print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
  ```
 
  If you are facing issues when loading the model, you can try to load it quantized:
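
For reference, a minimal, self-contained sketch of the updated usage after this change, assuming the model card loads the checkpoint with transformers' `AutoTokenizer` and `AutoModelForCausalLM` (the loading code is not part of this diff); the `model_id` value and the shortened prompt below are placeholders, not taken from the repository.

```python
# Sketch only: model_id and the short prompt are placeholders; the loading
# code is assumed and not shown in this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Di seguito è riportata un'istruzione che descrive un'attività."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids)

# Updated decoding: drop the prompt by slicing the generated token IDs at the
# prompt length in tokens, then decode only the continuation.
generated = outputs.detach().cpu().numpy()[:, input_ids.shape[1]:]
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

Slicing at `input_ids.shape[1]` removes the prompt at the token level, whereas the previous `[len(prompt):]` slice cut the decoded string by character count, which can misalign whenever the decoded text does not reproduce the prompt character for character (for example after whitespace normalization or added special tokens).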