sofiaoliveira committed
Commit 2376e9e
1 Parent(s): 9fd4ca2

Update README.md

Files changed (1)
  1. README.md +1 -6
README.md CHANGED
@@ -60,14 +60,9 @@ To use the full SRL model (transformers portion + a decoding layer), refer to th
  - The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
 
 
- ## Training data
-
- Pretrained weights were left identical to the original model [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large). A randomly initialized embeddings layer for "token_type_ids" was added.
-
-
  ## Training procedure
 
- The models were trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data, using Cross-Validation. They were tested on the folds as well as on a smaller opinion dataset, "Buscapé". For more information, please see the accompanying article (see BibTeX entry and citation info below) and the [project's GitHub](https://github.com/asofiaoliveira/srl_bert_pt).
+ The model was first fine-tuned on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data; then it was fine-tuned on the PropBank.Br dataset using 10-fold Cross-Validation. The resulting models were tested on the folds as well as on a smaller opinion dataset, "Buscapé". For more information, please see the accompanying article (see BibTeX entry and citation info below) and the [project's GitHub](https://github.com/asofiaoliveira/srl_bert_pt).
 
  ## Eval results
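The added line above describes 10-fold cross-validation over the PropBank.Br data. As a rough, hypothetical illustration only (the actual fold splits are defined in the srl_bert_pt repository, not generated here), a contiguous k-fold split of dataset indices can be sketched as:

```python
# Hypothetical sketch of 10-fold splitting; the real PropBank.Br folds
# come from the srl_bert_pt project, not from this function.
def kfold_indices(n_items, k=10):
    """Yield (train_indices, test_indices) for each of k contiguous folds."""
    base, extra = divmod(n_items, k)  # spread any remainder over early folds
    indices = list(range(n_items))
    start = 0
    for fold in range(k):
        size = base + (1 if fold < extra else 0)
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

# Example: 100 annotated sentences split into 10 folds of 10.
folds = list(kfold_indices(100, k=10))
```

Each model instance is then trained on the 9 training folds and evaluated on the held-out fold, which is how the per-fold results referenced under "Eval results" are obtained.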