The encoder concatenates the question and text, and the decoder generates the answer.

| Model | ROUGE-L F1 |
| ---------------- | ---------- |
| t5 (for the original English dataset, baseline) | 0.551 |
| ptt5-ptpt-qa (for the Portuguese machine-translated dataset) | 0.436 |

## Load Model and Tokenizer

```py
>>> from transformers import T5ForConditionalGeneration, T5Tokenizer
>>> model = T5ForConditionalGeneration.from_pretrained("benjleite/ptt5-ptpt-qa")
>>> tokenizer = T5Tokenizer.from_pretrained("unicamp-dl/ptt5-base-portuguese-vocab", model_max_length=512)
```

**Important Note**: Special tokens need to be added and the model's token embeddings must be resized: