joanllop committed
Commit 61093f1
Parent(s): 1e2fc94

Update README.md

Files changed (1)
  1. README.md +0 -90
README.md CHANGED
@@ -75,101 +75,11 @@ You can use the raw model for fill mask or fine-tune it to a downstream task.
 ## How to use
 Here is how to use this model:
 
- ```python
- >>> from transformers import pipeline
- >>> from pprint import pprint
- >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne')
- >>> pprint(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."))
- [{'score': 0.08422081917524338,
-   'token': 3832,
-   'token_str': ' desarrollar',
-   'sequence': 'Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.'},
-  {'score': 0.06348305940628052,
-   'token': 3078,
-   'token_str': ' crear',
-   'sequence': 'Gracias a los datos de la BNE se ha podido crear este modelo del lenguaje.'},
-  {'score': 0.06148449331521988,
-   'token': 2171,
-   'token_str': ' realizar',
-   'sequence': 'Gracias a los datos de la BNE se ha podido realizar este modelo del lenguaje.'},
-  {'score': 0.056218471378088,
-   'token': 10880,
-   'token_str': ' elaborar',
-   'sequence': 'Gracias a los datos de la BNE se ha podido elaborar este modelo del lenguaje.'},
-  {'score': 0.05133328214287758,
-   'token': 31915,
-   'token_str': ' validar',
-   'sequence': 'Gracias a los datos de la BNE se ha podido validar este modelo del lenguaje.'}]
- ```
-
- Here is how to use this model to get the features of a given text in PyTorch:
-
- ```python
- >>> from transformers import RobertaTokenizer, RobertaModel
- >>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
- >>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
- >>> text = "Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje."
- >>> encoded_input = tokenizer(text, return_tensors='pt')
- >>> output = model(**encoded_input)
- >>> print(output.last_hidden_state.shape)
- torch.Size([1, 19, 768])
- ```
 
 ## Limitations and bias
 
 At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here is an example of how the model can produce biased predictions:
 
- ```python
- >>> from transformers import pipeline, set_seed
- >>> from pprint import pprint
- >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne')
- >>> set_seed(42)
- >>> pprint(unmasker("Antonio está pensando en <mask>."))
- [{'score': 0.07950365543365479,
-   'sequence': 'Antonio está pensando en ti.',
-   'token': 486,
-   'token_str': ' ti'},
-  {'score': 0.03375273942947388,
-   'sequence': 'Antonio está pensando en irse.',
-   'token': 13134,
-   'token_str': ' irse'},
-  {'score': 0.031026942655444145,
-   'sequence': 'Antonio está pensando en casarse.',
-   'token': 24852,
-   'token_str': ' casarse'},
-  {'score': 0.030703715980052948,
-   'sequence': 'Antonio está pensando en todo.',
-   'token': 665,
-   'token_str': ' todo'},
-  {'score': 0.02838558703660965,
-   'sequence': 'Antonio está pensando en ello.',
-   'token': 1577,
-   'token_str': ' ello'}]
-
- >>> set_seed(42)
- >>> pprint(unmasker("Mohammed está pensando en <mask>."))
- [{'score': 0.05433618649840355,
-   'sequence': 'Mohammed está pensando en morir.',
-   'token': 9459,
-   'token_str': ' morir'},
-  {'score': 0.0400255024433136,
-   'sequence': 'Mohammed está pensando en irse.',
-   'token': 13134,
-   'token_str': ' irse'},
-  {'score': 0.03705748915672302,
-   'sequence': 'Mohammed está pensando en todo.',
-   'token': 665,
-   'token_str': ' todo'},
-  {'score': 0.03658654913306236,
-   'sequence': 'Mohammed está pensando en quedarse.',
-   'token': 9331,
-   'token_str': ' quedarse'},
-  {'score': 0.03329474478960037,
-   'sequence': 'Mohammed está pensando en ello.',
-   'token': 1577,
-   'token_str': ' ello'}]
- ```
 
 ## Training
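For context on the examples this commit removes: the `score` values in the pipeline output above are the softmax probabilities that the raw masked-language model assigns at the `<mask>` position. Below is a minimal sketch of that computation, assuming the `transformers` `AutoTokenizer`/`AutoModelForMaskedLM` classes and PyTorch; it is illustrative only and not part of the commit.

```python
# Illustrative sketch (assumes AutoTokenizer/AutoModelForMaskedLM and PyTorch):
# reproduce fill-mask scoring by hand with the raw model.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
model = AutoModelForMaskedLM.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')

text = "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."
inputs = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the <mask> token and softmax over the vocabulary at that position;
# these probabilities are what the pipeline reports as 'score'.
mask_pos = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
probs = logits[0, mask_pos[0]].softmax(dim=-1)

# Print the five most likely fillers, mirroring the removed example.
top = probs.topk(5)
for score, token_id in zip(top.values, top.indices):
    print(f"{score.item():.4f}\t{tokenizer.decode([token_id.item()])!r}")
```

The `fill-mask` pipeline performs essentially this computation internally before formatting the output shown in the removed hunk.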
 
 