Languages:
Romanian
Multilinguality:
monolingual
Size Categories:
10K<n<100K
Language Creators:
found
Annotations Creators:
other
Source Datasets:
original
Tags:
legal
License:
cc-by-nc-nd-4.0
joelniklaus committed on
Commit 8fca89d
1 Parent(s): 09605cb

changed notation scheme to IOB

Files changed (5):
  1. README.md +3 -1
  2. convert_to_hf_dataset.py +3 -5
  3. test.jsonl +2 -2
  4. train.jsonl +2 -2
  5. validation.jsonl +2 -2
README.md CHANGED

@@ -3,7 +3,7 @@ annotations_creators:
 - other
 language_creators:
 - found
-languages:
+language:
 - ro
 license:
 - cc-by-nc-nd-4.0
@@ -89,6 +89,8 @@ The files contain the following data fields
 - `TIME`: Time reference
 - `O`: No entity annotation present
 
+The final tagset (in IOB notation) is the following: `['O', 'B-TIME', 'I-TIME', 'B-LEGAL', 'I-LEGAL', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-PER', 'I-PER']`
+
 ### Data Splits
 
 Splits created by Joel Niklaus.
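The IOB scheme named in the commit marks the first token of an entity with `B-` and every following token of the same entity with `I-`. A minimal sketch of the idea, using a hypothetical helper and made-up tokens (not taken from the dataset):

```python
def spans_to_iob(tokens, spans):
    """Convert (start, end, label) token spans into IOB tags.

    `spans` uses token indices with an exclusive end; this helper is
    illustrative only and is not part of the conversion script.
    """
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"          # first token of the entity
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"          # continuation tokens
    return tags

tokens = ["Curtea", "de", "Apel", "Cluj", "din", "2019"]
spans = [(0, 4, "ORG"), (5, 6, "TIME")]
print(spans_to_iob(tokens, spans))
# ['B-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'B-TIME']
```

The `B-`/`I-` distinction lets two adjacent entities of the same type stay separable, which plain per-token type labels (the previous scheme) could not express.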
convert_to_hf_dataset.py CHANGED

@@ -1,5 +1,4 @@
 import os
-import re
 from glob import glob
 from pathlib import Path
 
@@ -17,9 +16,6 @@ base_path = Path("legalnero-data")
 tokenizer = Romanian().tokenizer
 
 
-# A and D are different government gazettes
-# A is the general one, publishing standard legislation, and D is meant for legislation on urban planning and such things
-
 def process_document(ann_file: str, text_file: Path, metadata: dict, tokenizer) -> List[dict]:
     """Processes one document (.ann file and .txt file) and returns a list of annotated sentences"""
     # read the ann file into a df
@@ -55,7 +51,7 @@ def process_document(ann_file: str, text_file: Path, metadata: dict, tokenizer)
             print(f"Could not find entity `{row['entity_text']}` in sentence `{sentence}`")
 
         ann_sent["words"] = [str(tok) for tok in doc]
-        ann_sent["ner"] = [tok.ent_type_ if tok.ent_type_ else "O" for tok in doc]
+        ann_sent["ner"] = [tok.ent_iob_ + "-" + tok.ent_type_ if tok.ent_type_ else "O" for tok in doc]
 
         annotated_sentences.append(ann_sent)
     if not_found_entities > 0:
@@ -89,6 +85,8 @@ df.ner = df.ner.apply(lambda x: x[:-1])
 # remove rows with containing only one word
 df = df[df.words.map(len) > 1]
 
+print(f"The final tagset (in IOB notation) is the following: `{list(df.ner.explode().unique())}`")
+
 # split by file_name
 num_fn = len(file_names)
 train_fn, validation_fn, test_fn = np.split(np.array(file_names), [int(.8 * num_fn), int(.9 * num_fn)])
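The two new lines in the script lean on pandas `Series.explode` (to flatten per-sentence tag lists into one tag stream) and `numpy.split` (to cut the shuffled file list into an 80/10/10 document-level split). A self-contained sketch with toy data, not the real dataset, shows what each call produces:

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the script's shape: one list of IOB tags per sentence.
df = pd.DataFrame({"ner": [["O", "B-PER", "I-PER"], ["B-ORG", "O"]]})

# explode() flattens the lists; unique() keeps first-seen order.
tagset = list(df.ner.explode().unique())
print(tagset)  # ['O', 'B-PER', 'I-PER', 'B-ORG']

# The 80/10/10 split pattern from the script, on 10 dummy file names:
file_names = np.array([f"doc_{i}" for i in range(10)])
train_fn, validation_fn, test_fn = np.split(file_names, [int(.8 * 10), int(.9 * 10)])
print(len(train_fn), len(validation_fn), len(test_fn))  # 8 1 1
```

Splitting by file name rather than by sentence keeps all sentences of a document in the same split, which avoids leaking near-duplicate context between train and test.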
test.jsonl CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8009d0a864d6ffe4174cd876afe7bf5f5c01cfb31d1165b76a9f5d2eecd23b85
-size 409786
+oid sha256:3c73ba1c9e453754d9efb64a6b94db9ffd0997a36dd4d16a7fcccd449c266513
+size 414576
train.jsonl CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0987ce4839662c5c4144aec5f9dbcb3669abb7143d45718e47af486b64cf0829
-size 3266615
+oid sha256:59f76a7b1c48596c8d3a6b0c19b7c4ff76ca991df091e0c3cb3447ee57386aa7
+size 3301849
validation.jsonl CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:84b8948d38ef736d39849811dc7648b1086903bf7efabb86c6ac266175575d57
-size 421295
+oid sha256:5b81f444ea43588cb3a62968df0d6708aa131754fe7dffee1ba94b5e0757a307
+size 426257
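The three `.jsonl` diffs change only Git LFS pointer files: the actual data lives in LFS storage, and the repository tracks a small text stub with the content's SHA-256 (`oid`) and byte `size`. A sketch, with a hypothetical helper not present in the repository, of how such a pointer can be checked against downloaded bytes:

```python
import hashlib

def matches_lfs_pointer(data: bytes, pointer_text: str) -> bool:
    """Check raw file bytes against a Git LFS pointer stub.

    Parses the `oid sha256:<hex>` and `size <n>` lines as seen in the
    diffs above (illustrative helper, not part of the repository).
    """
    fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
    oid = fields["oid"].split(":", 1)[1]
    size = int(fields["size"])
    return len(data) == size and hashlib.sha256(data).hexdigest() == oid

# Made-up payload and a pointer built to match it:
data = b'{"words": ["test"]}\n'
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(data).hexdigest()}\n"
    f"size {len(data)}\n"
)
print(matches_lfs_pointer(data, pointer))  # True
```

Because the pointer pins both hash and size, any change to the underlying `.jsonl` (here, the IOB re-export) necessarily shows up as a two-line pointer diff like the ones above.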