mmarimon committed on
Commit 5ef04bb
1 Parent(s): 98fc331

Update README.md

Files changed (1)
  1. README.md +37 -22
README.md CHANGED
@@ -48,30 +48,37 @@ widget:
# Catalan BERTa-v2 (roberta-base-ca-v2) fine-tuned for Text Classification

## Table of Contents
- - [Model Description](#model-description)
- - [Intended Uses and Limitations](#intended-uses-and-limitations)
- - [How to Use](#how-to-use)
+ <details>
+ <summary>Click to expand</summary>
+
+ - [Model description](#model-description)
+ - [Intended uses and limitations](#intended-uses-and-limitations)
+ - [How to use](#how-to-use)
+ - [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- - [Training Data](#training-data)
- - [Training Procedure](#training-procedure)
+ - [Training data](#training-data)
+ - [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- - [Variable and Metrics](#variable-and-metrics)
- - [Evaluation Results](#evaluation-results)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Funding](#funding)
- - [Contributions](#contributions)
- - [Disclaimer](#disclaimer)
+ - [Variable and metrics](#variable-and-metrics)
+ - [Evaluation results](#evaluation-results)
+ - [Author](#author)
+ - [Contact information](#contact-information)
+ - [Copyright](#copyright)
+ - [Licensing information](#licensing-information)
+ - [Funding](#funding)
+ - [Disclaimer](#disclaimer)
+
+ </details>

## Model description

The **roberta-base-ca-v2-cased-wikicat-ca** is a Text Classification model for the Catalan language, fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).

- ## Intended Uses and Limitations
+ ## Intended uses and limitations

The **roberta-base-ca-v2-cased-wikicat-ca** model can be used to classify texts. The model is limited by its training dataset and may not generalize well to all use cases.

- ## How to Use
+ ## How to use

Here is how to use this model:

@@ -86,17 +93,20 @@ tc_results = nlp(example)
pprint(tc_results)
```
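Only the tail of the README's usage snippet is visible in this hunk (the context shows `tc_results = nlp(example)`, the `pprint` call, and the closing fence). For orientation, a minimal sketch of such a snippet with the standard `transformers` text-classification pipeline follows; the repo id and the Catalan example sentence are assumptions, not lines from this commit:

```python
from pprint import pprint
from transformers import pipeline

# Assumed repo id, inferred from the model name in this card.
model_id = "projecte-aina/roberta-base-ca-v2-cased-wikicat-ca"

# A text-classification pipeline loads the fine-tuned head and tokenizer.
nlp = pipeline("text-classification", model=model_id)

# Hypothetical Catalan input; any short text works.
example = "La pel·lícula es va estrenar als cinemes l'any 1998."

tc_results = nlp(example)
pprint(tc_results)  # e.g. [{'label': '<category>', 'score': 0.97}]
```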

+ ## Limitations and bias
+ At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased, since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future and, if completed, this model card will be updated.
+
## Training

### Training data
We used the Catalan text classification (TC) dataset [WikiCAT_ca](https://huggingface.co/datasets/projecte-aina/WikiCAT_ca) for training and evaluation.

- ### Training Procedure
+ ### Training procedure
The model was trained with a batch size of 16 and three learning rates (1e-5, 3e-5, 5e-5) for 10 epochs. We then selected the best learning rate (3e-5) and checkpoint (epoch 3, step 1857) using the downstream task metric on the corresponding development set.
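The procedure above is a plain grid search over learning rates with per-epoch checkpoint selection on the dev set. A minimal sketch of that loop with the `transformers` `Trainer` follows; it is not the project's actual fine-tuning script (that lives in the club repository), and `train_ds`, `dev_ds`, `num_labels`, and `compute_metrics` are assumed to be prepared from WikiCAT_ca beforehand:

```python
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

# Assumed to exist: train_ds, dev_ds (tokenized WikiCAT_ca splits),
# num_labels, and a compute_metrics function returning {"f1": ...}.
best_f1, best_lr = 0.0, None
for lr in (1e-5, 3e-5, 5e-5):                  # the three rates from the card
    model = AutoModelForSequenceClassification.from_pretrained(
        "projecte-aina/roberta-base-ca-v2", num_labels=num_labels)
    args = TrainingArguments(
        output_dir=f"runs/lr_{lr}",
        per_device_train_batch_size=16,        # batch size 16
        learning_rate=lr,
        num_train_epochs=10,                   # 10 epochs per run
        evaluation_strategy="epoch",           # score the dev set each epoch
        save_strategy="epoch",
        load_best_model_at_end=True,           # keep the best dev checkpoint
        metric_for_best_model="f1",
    )
    trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                      eval_dataset=dev_ds, compute_metrics=compute_metrics)
    trainer.train()
    f1 = trainer.evaluate()["eval_f1"]         # weighted F1 on the dev set
    if f1 > best_f1:
        best_f1, best_lr = f1, lr              # 3e-5 won in the reported runs
```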

## Evaluation

- ### Variable and Metrics
+ ### Variable and metrics

This model was fine-tuned maximizing the weighted F1 score.
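For reference, the weighted F1 score averages per-class F1 values weighted by each class's support, so frequent categories contribute proportionally more. A toy computation with scikit-learn (illustrative labels only):

```python
from sklearn.metrics import f1_score

# Toy gold and predicted labels for a 3-class problem (illustration only).
y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 1, 1, 2, 2, 0]

# average="weighted": per-class F1, weighted by each class's support.
print(f1_score(y_true, y_pred, average="weighted"))  # ~0.68
```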

@@ -109,19 +119,24 @@ We evaluated the _roberta-base-ca-v2-cased-wikicat-ca_ on the WikiCAT_ca dev set

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).

- ## Licensing Information
-
- [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
-
- ## Citation Information
+ ## Additional information
+
+ ### Author
+ Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
+
+ ### Contact information
+ For further information, send an email to aina@bsc.es
+
+ ### Copyright
+ Copyright by the Text Mining Unit - Barcelona Supercomputing Center (2022)
+
+ ### Licensing information
+ [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).

- ## Contributions
-
- [N/A]

## Disclaimer