guilhermelmello committed
Commit 34c17a8
1 Parent(s): ec08cc2

Fix: usage example

Files changed (1)
  1. README.md +180 -180
README.md CHANGED
---
pretty_name: Carolina
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
languages:
- pt-BR
licenses:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1B<n<10B
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---

# Dataset Card for Corpus Carolina


## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)


## Dataset Description

- **Homepage:** [sites.usp.br/corpuscarolina](https://sites.usp.br/corpuscarolina/)
- **Current Version:** 1.1 (Ada)
- **Point of Contact:** [Guilherme L. Mello](mailto:guilhermelmello@ime.usp.br)


### Dataset Summary

Carolina is an Open Corpus for Linguistics and Artificial Intelligence with a
robust volume of texts of varied typology in contemporary Brazilian Portuguese
(1970-2021). This corpus contains documents and texts extracted from the web
and includes information (metadata) about their provenance and typology.

The documents are clustered into taxonomies, and the corpus can be loaded either as a whole or one taxonomy at a time. To load a single taxonomy, pass its code as a parameter to the loading script (see the example below). Codes are three-letter strings and the possible values are:

- `dat` : datasets and other corpora;
- `jud` : judicial branch;
- `leg` : legislative branch;
- `pub` : public domain works;
- `soc` : social media;
- `uni` : university domains;
- `wik` : wikis.


Usage Example:

```python
from datasets import load_dataset

# to load all taxonomies
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina")

# to load social media documents
social_media = load_dataset("carolina-c4ai/corpus-carolina", taxonomy="soc")
```

### Supported Tasks

The Carolina corpus was compiled for academic purposes,
namely linguistic and computational analysis.


### Languages

Contemporary Brazilian Portuguese (1970-2021).


## Dataset Structure

Files are stored inside the `corpus` folder, with a subfolder
for each taxonomy. Every file follows an XML structure
(TEI P5) and contains multiple extracted documents. For
each document, the text and metadata are exposed as
`text` and `meta` features, respectively.


### Data Instances

Every instance has the following structure.

```
{
    "meta": datasets.Value("string"),
    "text": datasets.Value("string")
}
```
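
As a quick sanity check, this schema can be inspected after loading. The snippet below is a minimal sketch, not part of the official loading script; it reuses the repository id from the usage example above and the `corpus` split described under Data Splits, and the exact printed representation depends on the installed `datasets` version.

```python
from datasets import load_dataset

# Reuses the repository id from the usage example above.
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina")

# The single split is named "corpus"; both features are plain strings.
print(corpus_carolina["corpus"].features)
print(corpus_carolina["corpus"][0].keys())  # expected keys: 'meta' and 'text'
```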

| Code | Taxonomy                   | Instances | Size   |
|:----:|:---------------------------|----------:|-------:|
|      | **Total**                  |   1745187 | 7.3 GB |
| dat  | Datasets and other Corpora |   1098717 | 3.3 GB |
| wik  | Wikis                      |    603968 | 2.6 GB |
| jud  | Judicial Branch            |     38068 | 1.4 GB |
| leg  | Legislative Branch         |        14 | 20 MB  |
| soc  | Social Media               |      3449 | 15 MB  |
| uni  | University Domains         |       945 | 9.4 MB |
| pub  | Public Domain Works        |        26 | 3.6 MB |


### Data Fields

- `meta`: an XML string with a TEI-conformant `teiHeader` tag. It is exposed as plain text and needs to be parsed in order to access the actual metadata (see the sketch below);
- `text`: a string containing the extracted document.

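Since `meta` is exposed as raw text, a small helper is needed to read it. The following is a minimal sketch using Python's standard `xml.etree.ElementTree`, not part of the official loading script; it assumes the headers are serialized with the standard TEI P5 namespace, and the elements actually present vary from document to document.

```python
import xml.etree.ElementTree as ET

# Standard TEI P5 namespace; if a header is serialized without a namespace
# declaration, drop the "tei:" prefix from the queries below.
TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def parse_meta(meta_xml: str) -> dict:
    """Pull a couple of common fields out of a TEI `teiHeader` string."""
    header = ET.fromstring(meta_xml)
    title = header.find(".//tei:titleStmt/tei:title", TEI_NS)
    source = header.find(".//tei:sourceDesc", TEI_NS)
    return {
        "title": title.text if title is not None else None,
        "source": "".join(source.itertext()).strip() if source is not None else None,
    }

# Hypothetical usage on one instance loaded as in the usage example:
# print(parse_meta(corpus_carolina["corpus"][0]["meta"]))
```
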
### Data Splits

As a general corpus, Carolina does not have splits. When loading the dataset, `corpus` is used as the name of its single split, as in the snippet below.

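A minimal sketch of selecting that split explicitly, assuming the same repository id as in the usage example above:

```python
from datasets import load_dataset

# "corpus" is the only split exposed by the loading script.
carolina = load_dataset("carolina-c4ai/corpus-carolina", split="corpus")
print(carolina)  # a single Dataset with "meta" and "text" columns
```
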

## Additional Information


### Dataset Curators

The Corpus Carolina is developed by a multidisciplinary
team of linguists and computer scientists, members of the
Virtual Laboratory of Digital Humanities (LaViHD) and the Artificial Intelligence Center of the University of São Paulo (C4AI).


### Licensing Information

The Open Corpus for Linguistics and Artificial Intelligence (Carolina) was
compiled for academic purposes, namely linguistic and computational analysis.
It is composed of texts assembled from various digital repositories, whose
licenses are multiple and should therefore be observed when making use of the
corpus. The Carolina headers are licensed under Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International.


### Citation Information

```
@misc{corpusCarolinaV1.1,
    title={
        Carolina:
        The Open Corpus for Linguistics and Artificial Intelligence
    },
    author={
        Finger, Marcelo and
        Paixão de Sousa, Maria Clara and
        Namiuti, Cristiane and
        Martins do Monte, Vanessa and
        Costa, Aline Silva and
        Serras, Felipe Ribas and
        Sturzeneker, Mariana Lourenço and
        Guets, Raquel de Paula and
        Mesquita, Renata Morais and
        Mello, Guilherme Lamartine de and
        Crespo, Maria Clara Ramos Morales and
        Rocha, Maria Lina de Souza Jeannine and
        Brasil, Patrícia and
        Silva, Mariana Marques da and
        Palma, Mayara Feliciano
    },
    howpublished={\url{https://sites.usp.br/corpuscarolina/corpus}},
    year={2022},
    note={Version 1.1 (Ada)},
}
```