gonzalez-agirre committed
Commit 079aceb
1 Parent(s): 59ffe45

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -102,6 +102,7 @@ example = "Me llamo Francisco Javier y vivo en Madrid."
 ner_results = nlp(example)
 pprint(ner_results)
 ```
+
 ## Limitations and bias
 At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
 
@@ -110,7 +111,7 @@ At the time of submission, no measures have been taken to estimate the bias embe
 The dataset used for training and evaluation is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
 
 ### Training procedure
-The model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
+The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
 
 ## Evaluation
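For reference, the context lines in the first hunk come from the card's usage example. A minimal, self-contained version of that snippet might look like the sketch below; the model id is a placeholder (the diff does not show it) and the aggregation setting is an assumption, not part of this commit.

```python
from pprint import pprint

from transformers import pipeline

# Placeholder id: substitute the id of the model repository this card belongs to.
model_id = "<namespace>/<capitel-ner-model>"

# Token-classification pipeline; "simple" aggregation groups word pieces into
# whole entities (an assumed setting, not shown in the diff).
nlp = pipeline("ner", model=model_id, aggregation_strategy="simple")

example = "Me llamo Francisco Javier y vivo en Madrid."
ner_results = nlp(example)
pprint(ner_results)
```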
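The second hunk corrects the learning rate from 1e-5 to 5e-5. As a rough sketch, the stated hyperparameters (batch size 16, learning rate 5e-5, 5 epochs, best checkpoint chosen on the development set) map onto a 🤗 Transformers Trainer configuration like the following; the output directory and the choice of F1 as the selection metric are assumptions, not taken from the card.

```python
from transformers import TrainingArguments

# Hyperparameters from the updated "Training procedure" paragraph; output_dir and
# metric_for_best_model="f1" are illustrative assumptions.
training_args = TrainingArguments(
    output_dir="capitel-ner-checkpoints",
    per_device_train_batch_size=16,
    learning_rate=5e-5,
    num_train_epochs=5,
    evaluation_strategy="epoch",   # named "eval_strategy" in recent transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,   # keep the best checkpoint by the metric below
    metric_for_best_model="f1",
)
```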