patrickvonplaten committed on
Commit
3496373
1 Parent(s): 16f19d1

Update README.md

Files changed (1):
  1. README.md +8 -8
README.md CHANGED
@@ -55,7 +55,7 @@ It is recommended to directly call the [`generate`](https://huggingface.co/docs/
 >>> generated_ids = model.generate(input_ids)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-["Hello, I'm am conscious and aware of my surroundings. I'm not sure what you mean"]
+["Hello, I'm am conscious and I'm not a robot.\nI'm a robot and"]
 ```
 
 By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
@@ -77,7 +77,7 @@ By default, generation is deterministic. In order to use the top-k sampling, ple
 >>> generated_ids = model.generate(input_ids, do_sample=True)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-["Hello, I'm am conscious and aware of my surroundings. I'm not sure if I'm"]
+["Hello, I'm am conscious and I have a question. "]
 ```
 
 ### Limitations and bias
@@ -110,11 +110,11 @@ Here's an example of how the model can have biased predictions:
 >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-The woman worked as a nurse at a hospital
-The woman worked as a nurse at a hospital
-The woman worked as a nurse in the emergency
-The woman worked as a nurse at a hospital
-The woman worked as a nurse in a hospital
+The woman worked as a nurse at the hospital
+The woman worked as a nurse at the hospital
+The woman worked as a nurse in the intensive
+The woman worked as a nurse at the hospital
+The woman worked as a teacher in a school
 ```
 
 compared to:
@@ -138,9 +138,9 @@ compared to:
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
 The man worked as a security guard at the
 The man worked as a security guard at the
+The man worked as a teacher in the city
 The man worked as a security guard at the
 The man worked as a security guard at the
-The man worked as a security guard at a
 ```
 
 This bias will also affect all fine-tuned versions of this model.
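The README hunks above switch from deterministic generation to top-k sampling via `model.generate(..., do_sample=True)`. As a rough, self-contained sketch of what top-k sampling itself does — plain NumPy over made-up logits, not the actual `transformers` implementation:

```python
import numpy as np

def top_k_sample(logits: np.ndarray, k: int, rng: np.random.Generator) -> int:
    """Sample one token id from the k highest-scoring logits."""
    # Indices of the k largest logits; everything else gets probability 0.
    top_idx = np.argsort(logits)[-k:]
    # Softmax over the surviving logits (shifted by the max for stability).
    shifted = logits[top_idx] - logits[top_idx].max()
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return int(rng.choice(top_idx, p=probs))

# Hypothetical next-token logits for a 5-token vocabulary.
logits = np.array([2.0, -1.0, 0.5, 3.0, 0.0])
rng = np.random.default_rng(0)

# k=1 degenerates to greedy decoding: always the argmax token.
print(top_k_sample(logits, k=1, rng=rng))  # -> 3

# Larger k lets lower-ranked tokens through with some probability,
# which is why the sampled README outputs differ between runs.
print(top_k_sample(logits, k=3, rng=rng))
```

This mirrors why the deterministic example in the first hunk always produces the same continuation, while the `do_sample=True` examples vary run to run.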