sofiaoliveira committed
Commit 3817eb9 • Parent(s): 1a10660
Update README.md

README.md CHANGED
````diff
@@ -15,7 +15,7 @@ metrics:
 - F1 Measure
 ---
 
-#
+# XLM-R large fine-tuned on English semantic role labeling
 
 ## Model description
 
@@ -44,6 +44,7 @@ For more information, please see the accompanying article (See BibTeX entry and
 #### How to use
 
 To use the transformers portion of this model:
+
 ```python
 from transformers import AutoTokenizer, AutoModel
 
@@ -60,12 +61,6 @@ To use the full SRL model (transformers portion + a decoding layer), refer to th
 - The models were trained only for 5 epochs.
 - The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
 
-
-## Training data
-
-Pretrained weights were left identical to the original model [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large). A randomly initialized embeddings layer for "token_type_ids" was added.
-
-
 ## Training procedure
 
 The models were trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data. They were tested on the PropBank.Br data set as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
````
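The ```python block shown in the diff context is cut off after its import line. A minimal sketch of how such a checkpoint is typically loaded with the `transformers` API follows; note that this excerpt does not show the model's actual Hub identifier, so the id below is a placeholder (the base `xlm-roberta-large`), not the fine-tuned SRL checkpoint:

```python
from transformers import AutoTokenizer, AutoModel

# Placeholder: the diff does not show the fine-tuned checkpoint's Hub id,
# so the base model id is used here for illustration only.
MODEL_ID = "xlm-roberta-large"

def load(model_id: str = MODEL_ID):
    """Download (or read from cache) the tokenizer and encoder weights."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load()
    # Encode a sentence and run it through the encoder; the output is one
    # contextual embedding per sub-token.
    enc = tokenizer("The cat sat on the mat.", return_tensors="pt")
    out = model(**enc)
    print(out.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```

As the diff notes, this only covers the transformers portion; the full SRL model adds a decoding layer on top, described in the project's GitHub repository.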