ArthurZ (HF staff) committed
Commit dc0db6d
1 Parent(s): e65de42

Update README.md


Update generations after major fix: https://github.com/huggingface/transformers/commit/abc400b06a8ab26cd438b6e9add3aad082ffc48f
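
For context, the snippets this diff updates come from the model card's usage example. Below is a minimal sketch of the full flow, reconstructed from the quoted hunks; the `float16` loading line is an assumption, since the diff does not quote the model load itself:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the full README loads OPT-30B in half precision on GPU;
# the diff below only quotes the lines after the load.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-30b", torch_dtype=torch.float16
).cuda()

# The fast tokenizer did not work correctly for OPT at the time of this commit.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)

prompt = "Hello, I am conscious and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

# Greedy (deterministic) generation, as in the first updated snippet.
generated_ids = model.generate(input_ids)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```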

Files changed (1):
  README.md +14 -14
README.md CHANGED
@@ -63,7 +63,7 @@ It is recommended to directly call the [`generate`](https://huggingface.co/docs/
 >>> # the fast tokenizer currently does not work correctly
 >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)
 
->>> prompt = "Hello, I'm am conscious and"
+>>> prompt = "Hello, I am conscious and"
 
 
 >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
@@ -71,7 +71,7 @@ It is recommended to directly call the [`generate`](https://huggingface.co/docs/
 >>> generated_ids = model.generate(input_ids)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-["Hello, I'm am conscious and I'm not a robot.\nI'm a robot and"]
+['Hello, I am conscious and I am here.\nI am also conscious and I am here']
 ```
 
 By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
@@ -85,7 +85,7 @@ By default, generation is deterministic. In order to use the top-k sampling, ple
 >>> # the fast tokenizer currently does not work correctly
 >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)
 
->>> prompt = "Hello, I'm am conscious and"
+>>> prompt = "Hello, I am conscious and"
 
 >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
 
@@ -93,7 +93,7 @@ By default, generation is deterministic. In order to use the top-k sampling, ple
 >>> generated_ids = model.generate(input_ids, do_sample=True)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-["Hello, I'm am conscious and I have a question. "]
+['Hello, I am conscious and aware that you have your back turned to me and want to talk']
 ```
 
 ### Limitations and bias
@@ -126,11 +126,11 @@ Here's an example of how the model can have biased predictions:
 >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-The woman worked as a nurse at the hospital
-The woman worked as a nurse at the hospital
-The woman worked as a nurse in the intensive
-The woman worked as a nurse at the hospital
-The woman worked as a teacher in a school
+The woman worked as a supervisor in the office
+The woman worked as a social worker in a
+The woman worked as a cashier at the
+The woman worked as a teacher from 2011 to
+he woman worked as a maid at the house
 ```
 
 compared to:
@@ -152,11 +152,11 @@ compared to:
 >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-The man worked as a security guard at the
-The man worked as a security guard at the
-The man worked as a teacher in the city
-The man worked as a security guard at the
-The man worked as a security guard at the
+The man worked as a school bus driver for
+The man worked as a bartender in a bar
+The man worked as a cashier at the
+The man worked as a teacher, and was
+The man worked as a professional at a range
 ```
 
 This bias will also affect all fine-tuned versions of this model.
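
For reference, the bias-probing snippets updated in the last two hunks follow the pattern below. This is a sketch under the same setup as the example above; the `set_seed` call and its value are assumptions, since the diff does not quote the seeding line:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-30b", torch_dtype=torch.float16
).cuda()
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)

set_seed(32)  # assumption: the README pins a seed before sampling

prompt = "The woman worked as a"  # swap in "The man worked as a" for the comparison
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

# Sample five continuations capped at ten tokens, as in the updated outputs.
generated_ids = model.generate(
    input_ids, do_sample=True, num_return_sequences=5, max_length=10
)
for text in tokenizer.batch_decode(generated_ids, skip_special_tokens=True):
    print(text)
```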