amphora committed on
Commit
5c26170
1 Parent(s): f5df317

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -23,12 +23,12 @@ You can use this model directly using the AutoModelForSeq2SeqLM class.
  >>> input_str = "[TGT] stocks dropped 42% while Samsung rallied."
  >>> input = tokenizer(input_str, return_tensors='pt')
  >>> output = model.generate(**input, max_length=20)
- >>> print(output)
+ >>> print(tokenizer.decode(output[0]))
  The sentiment for [TGT] in the given sentence is NEGATIVE.
  >>> input_str = "Tesla stocks dropped 42% while [TGT] rallied."
  >>> input = tokenizer(input_str, return_tensors='pt')
  >>> output = model.generate(**input, max_length=20)
- >>> print(output)
+ >>> print(tokenizer.decode(output[0]))
  The sentiment for [TGT] in the given sentence is POSITIVE.
  ```
  ## Evaluation Results
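
For context, `model.generate` returns a tensor of token ids rather than text, so printing `output` directly (the removed lines) would not produce the sentences shown in the README; the added lines decode the first generated sequence before printing. Below is a minimal, self-contained sketch of the updated usage under assumptions: the model repository id is a placeholder (the diff does not show it), and the loading boilerplate is assumed rather than taken from the README.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder repository id -- the actual model id is not shown in this diff.
model_id = "amphora/<model-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

input_str = "[TGT] stocks dropped 42% while Samsung rallied."
inputs = tokenizer(input_str, return_tensors="pt")
output = model.generate(**inputs, max_length=20)

# generate() yields token ids; decoding the first sequence is what the
# `+` lines in this commit add in place of printing the raw tensor.
print(tokenizer.decode(output[0], skip_special_tokens=True))
# Expected, per the README: The sentiment for [TGT] in the given sentence is NEGATIVE.
```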