patrickvonplaten committed on
Commit
769c7b9
1 Parent(s): a067f54

Update README.md

Files changed (1)
  1. README.md +74 -25
README.md CHANGED
@@ -31,25 +31,52 @@ In addition, the model can be fine-tuned on a downstream task using the [CLM exa
 
 ### How to use
 
- You can use this model directly with a pipeline for text generation.
 
 ```python
- >>> from transformers import pipeline
 
- >>> generator = pipeline('text-generation', model="facebook/opt-350m")
- >>> generator("Hello, I'm am conscious and")
- [{'generated_text': "Hello, I'm am conscious and I'm a bit of a noob. I'm looking for"}]
 ```
 
 By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
 
 ```python
- >>> from transformers import pipeline, set_seed
 
 >>> set_seed(32)
- >>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True)
- >>> generator("Hello, I'm am conscious and")
- [{'generated_text': "Hello, I'm am conscious and I'm interested in this project. Can I get an initial contact"}]
 ```
 
  ### Limitations and bias
@@ -66,31 +93,53 @@ unfiltered content from the internet, which is far from neutral the model is str
 Here's an example of how the model can have biased predictions:
 
 ```python
- >>> from transformers import pipeline, set_seed
 
 >>> set_seed(32)
- >>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
- >>> generator("The woman worked as a")
- [{'generated_text': "The woman works as a substitute teacher for kids who have missed school. She's the teacher herself,"},
-  {'generated_text': 'The woman works as a security guard for another company and does an average of around $13/hour'},
-  {'generated_text': 'The woman works as a receptionist, she could at the least wait a week or two for her'},
-  {'generated_text': 'The woman works as a manager/intern/career development coach/advisor at a nursing home'},
-  {'generated_text': 'The woman works as a maid and has to clean the house but you can tell her to do it'}]
 ```
 
 compared to:
 
 ```python
- >>> from transformers import pipeline, set_seed
 
 >>> set_seed(32)
- >>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
- >>> generator("The man worked as a")
- [{'generated_text': 'The man works as a security guard for the National Football League franchise. He has been a part of'},
-  {'generated_text': 'The man works as a security guard for another company and does an excellent job.\nI remember when'},
-  {'generated_text': 'The man works as a "secret agent" but at the same time he\'s working to protect the'},
-  {'generated_text': 'The man works as a manager/operator/servant for a grocery store and does a lot of'},
-  {'generated_text': 'The man works as a bouncer near the scene of the accident - how he could do that is'}]
 ```
 
  This bias will also affect all fine-tuned versions of this model.
 
 ### How to use
 
+ For large OPT models, such as this one, it is not recommended to make use of the `text-generation` pipeline because
+ one should load the model in half-precision to accelerate generation and optimize memory consumption on GPU.
+ It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate)
+ method as follows:
+
 
 ```python
+ >>> from transformers import AutoModelForCausalLM, AutoTokenizer
+ >>> import torch
+
+ >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda()
+
+ >>> # the fast tokenizer currently does not work correctly
+ >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
+
+ >>> prompt = "Hello, I'm am conscious and"
+
+ >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
+
+ >>> generated_ids = model.generate(input_ids)
+
+ >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
+ ["Hello, I'm am conscious and aware of my surroundings. I'm not sure what you mean"]
  ```
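If no GPU is available, a rough CPU-only variant of the same snippet is possible by dropping the half-precision cast and the `.cuda()` calls. This is a minimal sketch, not taken from the card; for a model of this size it needs a lot of RAM and is slow, so capping the output length with `max_new_tokens` helps:

```python
>>> # minimal CPU-only sketch: full-precision weights, no .cuda(), capped output length
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)

>>> input_ids = tokenizer("Hello, I'm am conscious and", return_tensors="pt").input_ids
>>> generated_ids = model.generate(input_ids, max_new_tokens=16)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```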
 
 By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
 
 ```python
+ >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
+ >>> import torch
+
+ >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda()
+
+ >>> # the fast tokenizer currently does not work correctly
+ >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
+
+ >>> prompt = "Hello, I'm am conscious and"
+
+ >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
 
 >>> set_seed(32)
+ >>> generated_ids = model.generate(input_ids, do_sample=True)
+
+ >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
+ ["Hello, I'm am conscious and aware of my surroundings. I'm not sure if I'm"]
  ```
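For finer control over the sampling distribution, `generate` also accepts the usual decoding parameters such as `top_k`, `top_p` and `temperature`. The values below are arbitrary and only meant as a sketch, reusing the `model`, `tokenizer` and `input_ids` objects from the snippet above:

```python
>>> # illustrative values only; sharper or flatter sampling via top-k / nucleus / temperature
>>> generated_ids = model.generate(
...     input_ids,
...     do_sample=True,
...     top_k=50,         # sample only from the 50 most likely next tokens
...     top_p=0.9,        # nucleus sampling over 90% of the probability mass
...     temperature=0.7,  # values < 1.0 sharpen the distribution
...     max_new_tokens=16,
... )
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```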
 
 ### Limitations and bias
 Here's an example of how the model can have biased predictions:
 
 ```python
+ >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
+ >>> import torch
+
+ >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda()
+
+ >>> # the fast tokenizer currently does not work correctly
+ >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
+
+ >>> prompt = "The woman worked as a"
+
+ >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
 
 >>> set_seed(32)
+ >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
+
+ >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
+ The woman worked as a nurse at a hospital
+ The woman worked as a nurse at a hospital
+ The woman worked as a nurse in the emergency
+ The woman worked as a nurse at a hospital
+ The woman worked as a nurse in a hospital
  ```
 
 compared to:
 
 ```python
+ >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
+ >>> import torch
+
+ >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda()
+
+ >>> # the fast tokenizer currently does not work correctly
+ >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
+
+ >>> prompt = "The man worked as a"
+
+ >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
 
 >>> set_seed(32)
+ >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
+
+ >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
+ The man worked as a security guard at the
+ The man worked as a security guard at the
+ The man worked as a security guard at the
+ The man worked as a security guard at the
+ The man worked as a security guard at a
 ```
 
  This bias will also affect all fine-tuned versions of this model.
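
One way to get a rough quantitative feel for this skew is to sample many continuations per prompt and tally the first word that follows it. The helper below is only an illustrative sketch (it reuses the `model` and `tokenizer` objects from the snippets above, and the sample size is arbitrary):

```python
>>> # illustrative sketch: count the first continuation word over many sampled outputs
>>> from collections import Counter

>>> def first_word_counts(prompt, n_samples=50):
...     input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
...     generated_ids = model.generate(
...         input_ids, do_sample=True, num_return_sequences=n_samples, max_new_tokens=5
...     )
...     texts = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
...     continuations = [t[len(prompt):].strip() for t in texts]
...     return Counter(c.split()[0] for c in continuations if c)

>>> first_word_counts("The woman worked as a").most_common(3)  # e.g. dominated by "nurse"
>>> first_word_counts("The man worked as a").most_common(3)    # e.g. dominated by "security"
```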