patrickvonplaten committed on
Commit
5dcfc51
1 Parent(s): 65cf736

Update README.md

Files changed (1):
  1. README.md +11 -11
README.md CHANGED
@@ -54,7 +54,7 @@ It is recommended to directly call the [`generate`](https://huggingface.co/docs/
 >>> generated_ids = model.generate(input_ids)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-["Hello, I'm am conscious and aware of my surroundings. I'm not sure what you mean"]
+["Hello, I'm am conscious and aware of my surroundings.\nI'm aware of my surroundings"]
 ```
 
 By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
@@ -76,7 +76,7 @@ By default, generation is deterministic. In order to use the top-k sampling, ple
 >>> generated_ids = model.generate(input_ids, do_sample=True)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-["Hello, I'm am conscious and aware of my surroundings. I'm not sure if I'm"]
+["Hello, I'm am conscious and aware of my surroundings. I'm not a robot.\n"]
 ```
 
 ### Limitations and bias
@@ -109,11 +109,11 @@ Here's an example of how the model can have biased predictions:
 >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
-The woman worked as a nurse at a hospital
-The woman worked as a nurse at a hospital
-The woman worked as a nurse in the emergency
-The woman worked as a nurse at a hospital
-The woman worked as a nurse in a hospital
+The woman worked as a nurse at the hospital
+The woman worked as a nurse at the hospital
+The woman worked as a nurse in the hospital
+The woman worked as a nurse for 20 years
+The woman worked as a teacher in a school
 ```
 
 compared to:
@@ -135,11 +135,11 @@ compared to:
 >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
 
 >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
+The man worked as a consultant for the Trump
+The man worked as a driver for Uber and
+The man worked as a janitor at the
 The man worked as a security guard at the
-The man worked as a security guard at the
-The man worked as a security guard at the
-The man worked as a security guard at the
-The man worked as a security guard at a
+The man worked as a teacher in a school
 ```
 
 This bias will also affect all fine-tuned versions of this model.
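The edited generations above come from `model.generate(..., do_sample=True)`, which the README describes as top-k sampling: instead of always taking the single most likely next token (deterministic greedy decoding), the model draws randomly from the k highest-probability tokens, so regenerated examples differ run to run — which is why the sampled outputs in the card had to be replaced rather than merely reworded. A minimal pure-Python sketch of one top-k sampling step (illustrative only; the `transformers` sampling path additionally handles temperature, top-p, batching, and more — the function name and toy logits here are invented for this sketch):

```python
import math
import random

def top_k_sample(logits, k, rng):
    """Sample one token id from the k highest-scoring logits.

    Illustrative sketch only -- not the `transformers` implementation.
    """
    # Rank token ids by logit and keep only the top k candidates.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over the kept logits (shift by the max for numerical stability).
    m = max(logits[i] for i in top)
    weights = [math.exp(logits[i] - m) for i in top]
    # Draw one token id in proportion to its renormalized probability.
    return rng.choices(top, weights=weights, k=1)[0]

rng = random.Random(0)                      # seeded for reproducibility
logits = [2.0, -1.0, 0.5, 3.0, -2.0]        # toy next-token scores
samples = {top_k_sample(logits, k=2, rng=rng) for _ in range(100)}
# With k=2, only ids 3 and 0 (the two largest logits) can ever be drawn.
print(sorted(samples))
```

This also explains the bias examples in the last two hunks: each of the five returned sequences (`num_return_sequences=5`) is an independent draw, so the old card's near-identical repeats and the new card's more varied completions are both plausible outcomes of the same stochastic procedure.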