ArthurZ (HF staff) committed
Commit ffd1ec5
1 Parent(s): d958377

Update README.md


Update generations after major fix: https://github.com/huggingface/transformers/commit/abc400b06a8ab26cd438b6e9add3aad082ffc48f

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -63,14 +63,14 @@ It is recommended to directly call the [`generate`](https://huggingface.co/docs/
 >>> # the fast tokenizer currently does not work correctly
 >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
 
->>> prompt = "Hello, I'm am conscious and"
+>>> prompt = "Hello, I'm conscious and"
 
 >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
 
 >>> generated_ids = model.generate(input_ids)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-["Hello, I'm am conscious and aware of my surroundings. I'm not sure what you mean"]
+['Hello, I am conscious and aware of my surroundings.\nI am also conscious and aware of']
 ```
 
 By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
@@ -84,7 +84,7 @@ By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
 >>> # the fast tokenizer currently does not work correctly
 >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
 
->>> prompt = "Hello, I'm am conscious and"
+>>> prompt = "Hello, I'm conscious and"
 
 >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
 
@@ -92,7 +92,7 @@ By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
 >>> generated_ids = model.generate(input_ids, do_sample=True)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-["Hello, I'm am conscious and aware of my surroundings. I'm not sure if I'm"]
+['Hello, I am conscious and aware.\nSo that makes you dead\nNot always.']
 ```
 
 ### Limitations and bias
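
For reference, here is the updated snippet assembled into one self-contained script. The model-loading lines fall outside the hunks above, so the `AutoModelForCausalLM` / `torch.float16` setup and the `set_seed` call below are assumptions based on the surrounding README rather than part of this diff; sampled outputs will differ from run to run.

```python
# Minimal sketch of the updated README example (not part of this diff).
# Assumption: the model is loaded in fp16 on GPU, as elsewhere in the README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b", torch_dtype=torch.float16
).cuda()

# the fast tokenizer currently does not work correctly
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)

prompt = "Hello, I'm conscious and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

# Deterministic (greedy) generation, as in the first snippet.
generated_ids = model.generate(input_ids)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))

# Top-k sampling, as in the second snippet; fix a seed if you need
# the sampled output to be repeatable.
set_seed(32)
generated_ids = model.generate(input_ids, do_sample=True)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```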