Dataset: joelito/greek_legal_ner
Tasks: Token Classification
Modalities: Text
Formats: json
Sub-tasks: named-entity-recognition
Languages: Greek
Size: 10K - 100K
Tags: legal
License: cc-by-nc-sa-4.0

Commit 89acdfd
Parent(s): 2b39f22
Update parquet files
.gitattributes
DELETED
@@ -1,40 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zstandard filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
-# Audio files - uncompressed
-*.pcm filter=lfs diff=lfs merge=lfs -text
-*.sam filter=lfs diff=lfs merge=lfs -text
-*.raw filter=lfs diff=lfs merge=lfs -text
-# Audio files - compressed
-*.aac filter=lfs diff=lfs merge=lfs -text
-*.flac filter=lfs diff=lfs merge=lfs -text
-*.mp3 filter=lfs diff=lfs merge=lfs -text
-*.ogg filter=lfs diff=lfs merge=lfs -text
-*.wav filter=lfs diff=lfs merge=lfs -text
-test.jsonl filter=lfs diff=lfs merge=lfs -text
-train.jsonl filter=lfs diff=lfs merge=lfs -text
-validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
DELETED
@@ -1,211 +0,0 @@
----
-annotations_creators:
-- other
-language_creators:
-- found
-language:
-- el
-license:
-- cc-by-nc-sa-4.0
-multilinguality:
-- monolingual
-paperswithcode_id: null
-pretty_name: Greek Legal Named Entity Recognition
-size_categories:
-- 10K<n<100K
-source_datasets:
-- original
-task_categories:
-- token-classification
-task_ids:
-- named-entity-recognition
----
-
-# Dataset Card for Greek Legal Named Entity Recognition
-
-## Table of Contents
-- [Table of Contents](#table-of-contents)
-- [Dataset Description](#dataset-description)
-  - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-  - [Languages](#languages)
-- [Dataset Structure](#dataset-structure)
-  - [Data Instances](#data-instances)
-  - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits)
-- [Dataset Creation](#dataset-creation)
-  - [Curation Rationale](#curation-rationale)
-  - [Source Data](#source-data)
-  - [Annotations](#annotations)
-  - [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
-  - [Discussion of Biases](#discussion-of-biases)
-  - [Other Known Limitations](#other-known-limitations)
-- [Additional Information](#additional-information)
-  - [Dataset Curators](#dataset-curators)
-  - [Licensing Information](#licensing-information)
-  - [Citation Information](#citation-information)
-  - [Contributions](#contributions)
-
-## Dataset Description
-
-- **Homepage:** http://legislation.di.uoa.gr/publications?language=en
-- **Repository:**
-- **Paper:** Angelidis, I., Chalkidis, I., & Koubarakis, M. (2018). Named Entity Recognition, Linking and Generation for Greek Legislation. JURIX.
-- **Leaderboard:**
-- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk); [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
-
-### Dataset Summary
-
-This dataset contains an annotated corpus for named entity recognition in Greek legislation. It is the first of its kind for the Greek language in such an extended form and one of the few that examine legal text with full-spectrum entity recognition.
-
-### Supported Tasks and Leaderboards
-
-The dataset supports the task of named entity recognition.
-
-### Languages
-
-The language in the dataset is Greek, as used in the Greek Government Gazette.
-
-## Dataset Structure
-
-### Data Instances
-
-The file format is jsonl and three data splits are present (train, validation and test).
-
-### Data Fields
-
-The files contain the following data fields:
-- `date`: The date when the document was published.
-- `gazette`: The government gazette of the document, either `A` or `D`.
-  - `A` is the general gazette, publishing standard legislation.
-  - `D` covers legislation on urban planning and similar matters.
-- `words`: The list of tokens obtained by applying the spaCy (v3.3.1) Greek tokenizer to the sentences. For more information see `convert_to_hf_dataset.py`.
-- `ner`: The list of NER tags. The named-entity labels covered by the dataset are the following:
-  - `FACILITY`: Facilities, such as police stations, departments, etc.
-  - `GPE`: Geopolitical Entity; any reference to a geopolitical entity (e.g., country, city, Greek administrative unit, etc.)
-  - `LEG-REFS`: Legislation Reference; any reference to Greek or European legislation (e.g., Presidential Decrees, Laws, Decisions, EU Regulations and Directives, etc.)
-  - `LOCATION-NAT`: Well-defined natural locations, such as rivers, mountains, lakes, etc.
-  - `LOCATION-UNK`: Poorly defined locations such as "End of road X" or other locations that are not "official".
-  - `ORG`: Organization; any reference to a public or private organization, such as international organizations (e.g., European Union, United Nations, etc.), Greek public organizations (e.g., Social Insurance Institution) or private ones (e.g., companies, NGOs, etc.).
-  - `PERSON`: Any formal name of a person mentioned in the text (e.g., Greek government members, public administration officials, etc.).
-  - `PUBLIC-DOCS`: Public Document Reference; any reference to documents or decisions that have been published by a public institution (organization) that are not considered a primary source of legislation (e.g., local decisions, announcements, memorandums, directives).
-  - `O`: No entity annotation present
-
-The final tagset (in IOB notation) is the following: `['O', 'B-ORG', 'I-ORG', 'B-GPE', 'I-GPE', 'B-LEG-REFS', 'I-LEG-REFS', 'B-PUBLIC-DOCS', 'I-PUBLIC-DOCS', 'B-PERSON', 'I-PERSON', 'B-FACILITY', 'I-FACILITY', 'B-LOCATION-UNK', 'I-LOCATION-UNK', 'B-LOCATION-NAT', 'I-LOCATION-NAT']`
-
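The IOB scheme used for the `ner` field can be decoded back into entity spans with a few lines of Python. A minimal sketch (the function name and the `(label, start, end)` span convention are ours, not part of the dataset):

```python
def iob_to_spans(tags):
    """Group token-level IOB tags (e.g. B-ORG, I-ORG, O) into
    (label, start, end) spans with an exclusive end index."""
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and label != tag[2:]):
            # a new entity begins; close any open one first
            if label is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag == "O":
            if label is not None:
                spans.append((label, start, i))
            start, label = None, None
    if label is not None:  # entity running to the end of the sentence
        spans.append((label, start, len(tags)))
    return spans

print(iob_to_spans(["B-ORG", "I-ORG", "O", "B-GPE"]))
# → [('ORG', 0, 2), ('GPE', 3, 4)]
```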
-### Data Splits
-
-The dataset has three splits: *train*, *validation* and *test*.
-
-Split across the documents:
-
-| split      | number of documents |
-|:-----------|--------------------:|
-| train      |               23723 |
-| validation |                5478 |
-| test       |                5084 |
-
-Split across NER labels:
-
-| NER label + split              | number of instances |
-|:-------------------------------|--------------------:|
-| ('FACILITY', 'test')           |                 142 |
-| ('FACILITY', 'train')          |                1224 |
-| ('FACILITY', 'validation')     |                  60 |
-| ('GPE', 'test')                |                1083 |
-| ('GPE', 'train')               |                5400 |
-| ('GPE', 'validation')          |                1214 |
-| ('LEG-REFS', 'test')           |                1331 |
-| ('LEG-REFS', 'train')          |                5159 |
-| ('LEG-REFS', 'validation')     |                1382 |
-| ('LOCATION-NAT', 'test')       |                  26 |
-| ('LOCATION-NAT', 'train')      |                 145 |
-| ('LOCATION-NAT', 'validation') |                   2 |
-| ('LOCATION-UNK', 'test')       |                 205 |
-| ('LOCATION-UNK', 'train')      |                1316 |
-| ('LOCATION-UNK', 'validation') |                 283 |
-| ('ORG', 'test')                |                1354 |
-| ('ORG', 'train')               |                5906 |
-| ('ORG', 'validation')          |                1506 |
-| ('PERSON', 'test')             |                 491 |
-| ('PERSON', 'train')            |                1921 |
-| ('PERSON', 'validation')       |                 475 |
-| ('PUBLIC-DOCS', 'test')        |                 452 |
-| ('PUBLIC-DOCS', 'train')       |                2652 |
-| ('PUBLIC-DOCS', 'validation')  |                 556 |
-
-
-
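As a quick consistency check on the card's statistics, the per-label counts in the table sum exactly to the per-split document counts listed above; a short sketch (the `counts` dict simply transcribes the table):

```python
# Per-split entity-instance counts, copied verbatim from the table above.
counts = {
    "train":      {"FACILITY": 1224, "GPE": 5400, "LEG-REFS": 5159,
                   "LOCATION-NAT": 145, "LOCATION-UNK": 1316,
                   "ORG": 5906, "PERSON": 1921, "PUBLIC-DOCS": 2652},
    "validation": {"FACILITY": 60, "GPE": 1214, "LEG-REFS": 1382,
                   "LOCATION-NAT": 2, "LOCATION-UNK": 283,
                   "ORG": 1506, "PERSON": 475, "PUBLIC-DOCS": 556},
    "test":       {"FACILITY": 142, "GPE": 1083, "LEG-REFS": 1331,
                   "LOCATION-NAT": 26, "LOCATION-UNK": 205,
                   "ORG": 1354, "PERSON": 491, "PUBLIC-DOCS": 452},
}
# Summing each split reproduces the document counts 23723 / 5478 / 5084.
totals = {split: sum(by_label.values()) for split, by_label in counts.items()}
print(totals)
# → {'train': 23723, 'validation': 5478, 'test': 5084}
```

Note the strong class imbalance: `LOCATION-NAT` has only 2 validation instances.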
-## Dataset Creation
-
-### Curation Rationale
-
-Creating a large dataset for Greek named entity recognition and entity linking.
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed]
-
-#### Who are the source language producers?
-
-Greek Government Gazette
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-According to (Angelidis et al., 2018) the authors of the paper annotated the data: *"Our group annotated all of the above documents for the 6 entity types that we examine."*
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original GitHub repositories and/or web pages provided in this dataset card.
-
-## Additional Information
-
-### Dataset Curators
-
-The names of the original dataset curators and creators can be found in the references given below, in the section *Citation Information*.
-Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch); [GitHub](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch); [GitHub](https://github.com/kapllan)).
-
-
-### Licensing Information
-
-[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/)
-
-### Citation Information
-
-```
-@inproceedings{Angelidis2018NamedER,
-  author = {Angelidis, Iosif and Chalkidis, Ilias and Koubarakis, Manolis},
-  booktitle = {JURIX},
-  keywords = {greek, legal nlp, named entity recognition},
-  title = {{Named Entity Recognition, Linking and Generation for Greek Legislation}},
-  year = {2018}
-}
-```
-
-### Contributions
-
-Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
convert_to_hf_dataset.py
DELETED
@@ -1,101 +0,0 @@
-import os
-from glob import glob
-from pathlib import Path
-
-from typing import List
-
-import pandas as pd
-
-from spacy.lang.el import Greek
-
-pd.set_option('display.max_colwidth', None)
-pd.set_option('display.max_columns', None)
-
-base_path = Path("DATASETS/ENTITY RECOGNITION")
-tokenizer = Greek().tokenizer
-
-
-# A and D are different government gazettes
-# A is the general one, publishing standard legislation, and D is meant for legislation on urban planning and such things
-
-def process_document(ann_file: str, text_file: Path, metadata: dict, tokenizer) -> List[dict]:
-    """Processes one document (.ann file and .txt file) and returns a list of annotated sentences"""
-    # read the ann file into a df
-    ann_df = pd.read_csv(ann_file, sep="\t", header=None, names=["id", "entity_with_span", "entity_text"])
-    sentences = [sent for sent in text_file.read_text().split("\n") if sent]  # remove empty sentences
-
-    # split into individual columns
-    ann_df[["entity", "start", "end"]] = ann_df["entity_with_span"].str.split(" ", expand=True)
-    ann_df.start = ann_df.start.astype(int)
-    ann_df.end = ann_df.end.astype(int)
-
-    not_found_entities = 0
-    annotated_sentences = []
-    current_start_index = 0
-    for sentence in sentences:
-        ann_sent = {**metadata}
-
-        doc = tokenizer(sentence)
-        doc_start_index = current_start_index
-        doc_end_index = current_start_index + len(sentence)
-        current_start_index = doc_end_index + 1
-
-        relevant_annotations = ann_df[(ann_df.start >= doc_start_index) & (ann_df.end <= doc_end_index)]
-        for _, row in relevant_annotations.iterrows():
-            sent_start_index = row["start"] - doc_start_index
-            sent_end_index = row["end"] - doc_start_index
-            char_span = doc.char_span(sent_start_index, sent_end_index, label=row["entity"], alignment_mode="expand")
-            # ent_span = Span(doc, char_span.start, char_span.end, row["entity"])
-            if char_span:
-                doc.set_ents([char_span])
-            else:
-                not_found_entities += 1
-                print(f"Could not find entity `{row['entity_text']}` in sentence `{sentence}`")
-
-        ann_sent["words"] = [str(tok) for tok in doc]
-        ann_sent["ner"] = [tok.ent_iob_ + "-" + tok.ent_type_ if tok.ent_type_ else "O" for tok in doc]
-
-        annotated_sentences.append(ann_sent)
-
-    print(f"Did not find entities in {not_found_entities} cases")
-    return annotated_sentences
-
-
-def read_to_df(split):
-    """Reads the different documents and saves metadata"""
-    ann_files = glob(str(base_path / split / "ANN" / "*/*/*.ann"))
-    sentences = []
-    for ann_file in ann_files:
-        path = Path(ann_file)
-        year = path.parent.stem
-        file_name = path.stem
-        _, gazette, gazette_number, _, date = tuple(file_name.split(' '))
-        text_file = base_path / split / "TXT" / f"{gazette}/{year}/{file_name}.txt"
-        metadata = {
-            "date": date,
-            "gazette": gazette,
-            # "gazette_number": gazette_number,
-        }
-        sentences.extend(process_document(ann_file, text_file, metadata, tokenizer))
-    return pd.DataFrame(sentences)
-
-
-splits = ["TRAIN", "VALIDATION", "TEST"]
-train = read_to_df("TRAIN")
-validation = read_to_df("VALIDATION")
-test = read_to_df("TEST")
-
-df = pd.concat([train, validation, test])
-print(f"The final tagset (in IOB notation) is the following: `{list(df.ner.explode().unique())}`")
-
-
-# save splits
-def save_splits_to_jsonl(config_name):
-    # save to jsonl files for huggingface
-    if config_name: os.makedirs(config_name, exist_ok=True)
-    train.to_json(os.path.join(config_name, "train.jsonl"), lines=True, orient="records", force_ascii=False)
-    validation.to_json(os.path.join(config_name, "validation.jsonl"), lines=True, orient="records", force_ascii=False)
-    test.to_json(os.path.join(config_name, "test.jsonl"), lines=True, orient="records", force_ascii=False)
-
-
-save_splits_to_jsonl("")
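For reference, the jsonl layout produced by `save_splits_to_jsonl` (one JSON object per line with the fields `date`, `gazette`, `words`, `ner`) can be illustrated with a self-contained round trip; the record contents below are invented for illustration, not taken from the dataset:

```python
import json
import os
import tempfile

# One invented record in the schema the conversion script emits.
record = {
    "date": "12.07.2002",
    "gazette": "A",
    "words": ["Υπουργείο", "Οικονομικών", "."],
    "ner": ["B-ORG", "I-ORG", "O"],
}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "train.jsonl")
    # one JSON object per line; ensure_ascii=False keeps Greek text
    # readable, mirroring force_ascii=False in the script above
    with open(path, "w", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    with open(path, encoding="utf-8") as f:
        loaded = [json.loads(line) for line in f]

assert loaded == [record]
```

Note that `words` and `ner` are parallel lists: one tag per token.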
train.jsonl → joelito--greek_legal_ner/json-test.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:d9a797195b94afb256e22c85b7cd7900e71a47cdb9bab5c568880848330c710a
+size 429049
validation.jsonl → joelito--greek_legal_ner/json-train.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:d4c1e48c1cbca24fa066a853b3a3619186a032444a73f496e33369f79f985714
+size 1587323
test.jsonl → joelito--greek_legal_ner/json-validation.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f23f9670d1adcbd76badee642035bd61e4d70a4841e1808d94ab63396e7c631a
+size 444852