m-elio committed
Commit 370ed61
1 Parent(s): 0f26c42

update model card usage example

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -72,7 +72,7 @@ prompt = "Di seguito è riportata un'istruzione che descrive un'attività, accom
  input_ids = tokenizer(prompt, return_tensors="pt").input_ids
  outputs = model.generate(input_ids=input_ids)
 
- print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):])
+ print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
  ```
 
  If you are facing issues when loading the model, you can try to load it quantized:
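
For context, the edited line is the last step of the usage snippet in the model card: the old version trimmed the prompt from the decoded string by character count (`[len(prompt):]`), which misaligns whenever the decoded text does not reproduce the prompt verbatim, while the new version drops the prompt at the token level before decoding. Below is a minimal, self-contained sketch of how that line fits into a full generation call; the model id, device placement, and `max_new_tokens` value are illustrative assumptions and do not come from this commit.

```python
# Minimal usage sketch; the model id and generation settings below are
# illustrative assumptions, not taken from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<model-id-from-this-repo>"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Di seguito è riportata un'istruzione che descrive un'attività."  # shortened placeholder prompt

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=128)

# outputs has shape (batch, prompt_tokens + new_tokens); dropping the first
# input_ids.shape[1] columns keeps only the newly generated tokens, which is
# what the updated model-card line does.
print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
```

The same slicing works unchanged if the model is loaded quantized, as the card suggests for memory-constrained setups (e.g. by passing a `BitsAndBytesConfig(load_in_4bit=True)` as `quantization_config` to `from_pretrained`), since only the decoding step is affected by this change.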