system HF staff committed on
Commit
bad90f5
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,180 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - pt
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - structure-prediction
+ task_ids:
+ - named-entity-recognition
+ ---
+
+ # Dataset Card for LeNER-Br
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [LeNER-Br homepage](https://cic.unb.br/~teodecampos/LeNER-Br/)
+ - **Repository:** [LeNER-Br repository](https://github.com/peluz/lener-br)
+ - **Paper:** [LeNER-Br: a Dataset for Named Entity Recognition in Brazilian Legal Text](https://cic.unb.br/~teodecampos/LeNER-Br/luz_etal_propor2018.pdf)
+ - **Point of Contact:** [Pedro H. Luz de Araujo](mailto:pedrohluzaraujo@gmail.com)
+
+ ### Dataset Summary
+
+ LeNER-Br is a Portuguese-language dataset for named entity recognition
+ applied to legal documents. It consists entirely of manually annotated
+ legislation and legal case texts and contains tags for persons, locations,
+ time entities, organizations, legislation, and legal cases.
+ To compose the dataset, 66 legal documents from several Brazilian courts were
+ collected. Courts at the superior and state levels were considered, such as the
+ Supremo Tribunal Federal, Superior Tribunal de Justiça, Tribunal de Justiça de
+ Minas Gerais, and Tribunal de Contas da União. In addition, four legislation
+ documents were collected, such as "Lei Maria da Penha", giving a total of 70 documents.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The language supported is Portuguese.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example from the dataset looks as follows:
+
+ ```
+ {
+   "id": "0",
+   "ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0],
+   "tokens": [
+     "EMENTA", ":", "APELAÇÃO", "CÍVEL", "-", "AÇÃO", "DE", "INDENIZAÇÃO", "POR", "DANOS", "MORAIS", "-", "PRELIMINAR", "-", "ARGUIDA", "PELO", "MINISTÉRIO", "PÚBLICO", "EM", "GRAU", "RECURSAL"
+   ]
+ }
+ ```
+ ### Data Fields
+
+ - `id`: id of the sample
+ - `tokens`: the tokens of the example text
+ - `ner_tags`: the NER tag of each token
+
+ The NER tags correspond to this list:
+ ```
+ "O", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-PESSOA", "I-PESSOA", "B-TEMPO", "I-TEMPO", "B-LOCAL", "I-LOCAL", "B-LEGISLACAO", "I-LEGISLACAO", "B-JURISPRUDENCIA", "I-JURISPRUDENCIA"
+ ```
+ The NER tags follow the CoNLL shared-task format: a B- tag marks the first token of an entity and an I- tag marks every following token of the same entity.
+
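+ For a quick sanity check, here is a minimal usage sketch (assuming the dataset is available on the Hub as `lener_br`) that loads the data and decodes the integer tags back to their string names:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load all three splits (train/validation/test).
+ dataset = load_dataset("lener_br")
+
+ # The ClassLabel feature holds the id-to-name mapping for the tags.
+ label_names = dataset["train"].features["ner_tags"].feature.names
+
+ # Pair each token of the first training example with its decoded tag.
+ example = dataset["train"][0]
+ for token, tag_id in zip(example["tokens"], example["ner_tags"]):
+     print(token, label_names[tag_id])
+ ```
+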
+ ### Data Splits
+
+ The data is split into train, validation, and test sets. The split sizes are as follows:
+
+ | Train | Validation | Test |
+ | ----- | ---------- | ---- |
+ | 7828  | 1177       | 1390 |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @inproceedings{luz_etal_propor2018,
+   author = {Pedro H. {Luz de Araujo} and Te\'{o}filo E. {de Campos} and
+             Renato R. R. {de Oliveira} and Matheus Stauffer and
+             Samuel Couto and Paulo Bermejo},
+   title = {{LeNER-Br}: a Dataset for Named Entity Recognition in {Brazilian} Legal Text},
+   booktitle = {International Conference on the Computational Processing of Portuguese ({PROPOR})},
+   publisher = {Springer},
+   series = {Lecture Notes in Computer Science ({LNCS})},
+   pages = {313--323},
+   year = {2018},
+   month = {September 24-26},
+   address = {Canela, RS, Brazil},
+   doi = {10.1007/978-3-319-99722-3_32},
+   url = {https://cic.unb.br/~teodecampos/LeNER-Br/},
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"lener_br": {"description": "\nLeNER-Br is a Portuguese language dataset for named entity recognition \napplied to legal documents. LeNER-Br consists entirely of manually annotated \nlegislation and legal cases texts and contains tags for persons, locations, \ntime entities, organizations, legislation and legal cases.\nTo compose the dataset, 66 legal documents from several Brazilian Courts were\ncollected. Courts of superior and state levels were considered, such as Supremo\nTribunal Federal, Superior Tribunal de Justi\u00e7a, Tribunal de Justi\u00e7a de Minas\nGerais and Tribunal de Contas da Uni\u00e3o. In addition, four legislation documents\nwere collected, such as \"Lei Maria da Penha\", giving a total of 70 documents\n", "citation": "\n@inproceedings{luz_etal_propor2018,\n author = {Pedro H. {Luz de Araujo} and Te'{o}filo E. {de Campos} and\n Renato R. R. {de Oliveira} and Matheus Stauffer and\n Samuel Couto and Paulo Bermejo},\n title = {{LeNER-Br}: a Dataset for Named Entity Recognition in {Brazilian} Legal Text},\n booktitle = {International Conference on the Computational Processing of Portuguese ({PROPOR})},\n publisher = {Springer},\n series = {Lecture Notes on Computer Science ({LNCS})},\n pages = {313--323},\n year = {2018},\n month = {September 24-26},\n address = {Canela, RS, Brazil},\t \n doi = {10.1007/978-3-319-99722-3_32},\n url = {https://cic.unb.br/~teodecampos/LeNER-Br/},\n}\n", "homepage": "https://cic.unb.br/~teodecampos/LeNER-Br/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 13, "names": ["O", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-PESSOA", "I-PESSOA", "B-TEMPO", "I-TEMPO", "B-LOCAL", "I-LOCAL", "B-LEGISLACAO", "I-LEGISLACAO", "B-JURISPRUDENCIA", "I-JURISPRUDENCIA"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "lener_br", "config_name": "lener_br", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3984189, "num_examples": 7828, "dataset_name": "lener_br"}, "validation": {"name": "validation", "num_bytes": 719433, "num_examples": 1177, "dataset_name": "lener_br"}, "test": {"name": "test", "num_bytes": 823708, "num_examples": 1390, "dataset_name": "lener_br"}}, "download_checksums": {"https://github.com/peluz/lener-br/raw/master/leNER-Br/train/train.conll": {"num_bytes": 2142199, "checksum": "6fdf9066333c84565f9e3d28ee8f0f519336bece69b63f8d78b8de0fe96dcd47"}, "https://github.com/peluz/lener-br/raw/master/leNER-Br/dev/dev.conll": {"num_bytes": 402497, "checksum": "7e350feb828198031e57c21d6aadbf8dac92b19a684e45d7081c6cb491e2063b"}, "https://github.com/peluz/lener-br/raw/master/leNER-Br/test/test.conll": {"num_bytes": 438441, "checksum": "f90cd26a31afc2d1f132c4473d40c26d2283a98b374025fa5b5985b723dce825"}}, "download_size": 2983137, "post_processing_size": null, "dataset_size": 5527330, "size_in_bytes": 8510467}}
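This metadata can be inspected without loading the dataset itself; a minimal sketch, assuming `dataset_infos.json` has been saved to the working directory:

```python
import json

# Read the generated metadata file locally.
with open("dataset_infos.json", encoding="utf-8") as f:
    info = json.load(f)["lener_br"]

# Report the number of examples recorded for each split.
for split_name, split_info in info["splits"].items():
    print(split_name, split_info["num_examples"])
# -> train 7828, validation 1177, test 1390
```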
dummy/lener_br/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:68278d9bf7fc56e7acc135ae56409e068426e0020d8188dd07c2c33bf387aac9
+ size 1290
lener_br.py ADDED
@@ -0,0 +1,160 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """LeNER-Br dataset"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import logging
+
+ import datasets
+
+
+ _CITATION = """
+ @inproceedings{luz_etal_propor2018,
+   author = {Pedro H. {Luz de Araujo} and Te\'{o}filo E. {de Campos} and
+             Renato R. R. {de Oliveira} and Matheus Stauffer and
+             Samuel Couto and Paulo Bermejo},
+   title = {{LeNER-Br}: a Dataset for Named Entity Recognition in {Brazilian} Legal Text},
+   booktitle = {International Conference on the Computational Processing of Portuguese ({PROPOR})},
+   publisher = {Springer},
+   series = {Lecture Notes on Computer Science ({LNCS})},
+   pages = {313--323},
+   year = {2018},
+   month = {September 24-26},
+   address = {Canela, RS, Brazil},
+   doi = {10.1007/978-3-319-99722-3_32},
+   url = {https://cic.unb.br/~teodecampos/LeNER-Br/},
+ }
+ """
+
+ _DESCRIPTION = """
+ LeNER-Br is a Portuguese language dataset for named entity recognition
+ applied to legal documents. LeNER-Br consists entirely of manually annotated
+ legislation and legal cases texts and contains tags for persons, locations,
+ time entities, organizations, legislation and legal cases.
+ To compose the dataset, 66 legal documents from several Brazilian Courts were
+ collected. Courts of superior and state levels were considered, such as Supremo
+ Tribunal Federal, Superior Tribunal de Justiça, Tribunal de Justiça de Minas
+ Gerais and Tribunal de Contas da União. In addition, four legislation documents
+ were collected, such as "Lei Maria da Penha", giving a total of 70 documents
+ """
+
+ _HOMEPAGE = "https://cic.unb.br/~teodecampos/LeNER-Br/"
+
+ _URL = "https://github.com/peluz/lener-br/raw/master/leNER-Br/"
+ _TRAINING_FILE = "train/train.conll"
+ _DEV_FILE = "dev/dev.conll"
+ _TEST_FILE = "test/test.conll"
+
+
+ class LenerBr(datasets.GeneratorBasedBuilder):
+     """LeNER-Br dataset"""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="lener_br", version=VERSION, description="LeNER-Br dataset"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "ner_tags": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 "O",
+                                 "B-ORGANIZACAO",
+                                 "I-ORGANIZACAO",
+                                 "B-PESSOA",
+                                 "I-PESSOA",
+                                 "B-TEMPO",
+                                 "I-TEMPO",
+                                 "B-LOCAL",
+                                 "I-LOCAL",
+                                 "B-LEGISLACAO",
+                                 "I-LEGISLACAO",
+                                 "B-JURISPRUDENCIA",
+                                 "I-JURISPRUDENCIA",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://cic.unb.br/~teodecampos/LeNER-Br/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         urls_to_download = {
+             "train": f"{_URL}{_TRAINING_FILE}",
+             "dev": f"{_URL}{_DEV_FILE}",
+             "test": f"{_URL}{_TEST_FILE}",
+         }
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": downloaded_files["train"], "split": "train"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": downloaded_files["dev"], "split": "validation"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": downloaded_files["test"], "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+
+         logging.info("⏳ Generating examples from = %s", filepath)
+
+         with open(filepath, encoding="utf-8") as f:
+
+             guid = 0
+             tokens = []
+             ner_tags = []
+
+             for line in f:
+                 # Blank lines (and -DOCSTART- markers) separate sentences in
+                 # the CoNLL format: emit the example accumulated so far.
+                 if line.startswith("-DOCSTART-") or line == "" or line == "\n":
+                     if tokens:
+                         yield guid, {
+                             "id": str(guid),
+                             "tokens": tokens,
+                             "ner_tags": ner_tags,
+                         }
+                         guid += 1
+                         tokens = []
+                         ner_tags = []
+                 else:
+                     # Each non-blank line holds "<token> <tag>" separated by a space.
+                     splits = line.split(" ")
+                     tokens.append(splits[0])
+                     ner_tags.append(splits[1].rstrip())
+
+             # Last example; the guard avoids yielding an empty example when
+             # the file ends with a blank line.
+             if tokens:
+                 yield guid, {
+                     "id": str(guid),
+                     "tokens": tokens,
+                     "ner_tags": ner_tags,
+                 }
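As a worked illustration of the parsing above, here is a minimal sketch that mirrors the sentence-chunking logic of `_generate_examples` on a hypothetical in-memory sample (the sample text is invented for illustration, not taken from the dataset files):

```python
import io

# Two tiny sentences in the "<token> <tag>" layout the script expects;
# a blank line separates the sentences.
sample = io.StringIO("EMENTA O\n: O\n\nLei B-LEGISLACAO\nMaria I-LEGISLACAO\n")

tokens, sentences = [], []
for line in sample:
    if line == "\n":
        # Blank line: flush the accumulated sentence.
        if tokens:
            sentences.append(tokens)
            tokens = []
    else:
        token, tag = line.rstrip().split(" ")
        tokens.append((token, tag))
if tokens:  # flush the final sentence if the input lacks a trailing blank line
    sentences.append(tokens)

print(sentences)
# [[('EMENTA', 'O'), (':', 'O')], [('Lei', 'B-LEGISLACAO'), ('Maria', 'I-LEGISLACAO')]]
```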