---
license: cc-by-nd-4.0
viewer: true
task_categories:
- token-classification
tags:
- legal
pretty_name: Multilingual Negation Scope Resolution
size_categories:
- 1K<n<10K
---
# Dataset Card for MultiLegalNeg

### Dataset Summary

This dataset consists of German, French, and Italian court documents annotated for negation cues and negation scopes. It also includes reformatted versions of ConanDoyle-neg ([Morante and Blanco, 2012](https://aclanthology.org/S12-1035/)), SFU Review ([Konstantinova et al., 2012](http://www.lrec-conf.org/proceedings/lrec2012/pdf/533_Paper.pdf)), BioScope ([Szarvas et al., 2008](https://aclanthology.org/W08-0606/)), and Dalloux ([Dalloux et al., 2020](https://clementdalloux.fr/?page_id=28)).

### Languages


| Language             | Subset             | Number of sentences | Negated sentences |
|----------------------|--------------------|---------------------|-------------------|
| French               | **fr**             | 1059                | 382               |
| Italian              | **it**             | 1001                | 418               |
| German (Germany)     | **de (DE)**        | 1068                | 1098              |
| German (Switzerland) | **de (CH)**        | 206                 | 208               |
| English              | **SFU Review**     | 17672               | 3528              |
| English              | **BioScope**       | 14700               | 2095              |
| English              | **ConanDoyle-neg** | 5714                | 5714              |
| French               | **Dalloux**        | 11032               | 1817              |


## Dataset Structure

### Data Fields

- text (string): the full sentence
- spans (list): list of annotated cues and scopes
  - start (int): character offset of the beginning of the annotation
  - end (int): character offset of the end of the annotation
  - token_start (int): id of the first token in the annotation
  - token_end (int): id of the last token in the annotation
  - label (string): CUE or SCOPE
- tokens (list): list of tokens in the sentence
  - text (string): token text
  - start (int): offset of the first character
  - end (int): offset of the last character
  - id (int): token id
  - ws (boolean): whether the token is followed by whitespace
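
For illustration, here is a minimal, invented example in the schema above (the sentence, offsets, and ids are made up; treating `start`/`end` as end-exclusive Python-style offsets is an assumption):

```
# Invented example following the documented schema; assumes end-exclusive
# character offsets, so they work directly as Python slice bounds.
example = {
    "text": "The court did not accept the appeal.",
    "spans": [
        {"start": 14, "end": 17, "token_start": 3, "token_end": 3, "label": "CUE"},
        {"start": 18, "end": 35, "token_start": 4, "token_end": 6, "label": "SCOPE"},
    ],
    "tokens": [
        {"text": "The",    "start": 0,  "end": 3,  "id": 0, "ws": True},
        {"text": "court",  "start": 4,  "end": 9,  "id": 1, "ws": True},
        {"text": "did",    "start": 10, "end": 13, "id": 2, "ws": True},
        {"text": "not",    "start": 14, "end": 17, "id": 3, "ws": True},
        {"text": "accept", "start": 18, "end": 24, "id": 4, "ws": True},
        {"text": "the",    "start": 25, "end": 28, "id": 5, "ws": True},
        {"text": "appeal", "start": 29, "end": 35, "id": 6, "ws": False},
        {"text": ".",      "start": 35, "end": 36, "id": 7, "ws": False},
    ],
}

# Character offsets index into the sentence text
for span in example["spans"]:
    print(span["label"], "->", example["text"][span["start"]:span["end"]])
# CUE -> not
# SCOPE -> accept the appeal
```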

### Data Splits
For each subset, train (70%), test (20%), and validation (10%) splits are available.

#### How to use this dataset

To load all data, use the ```'all_all'``` configuration, or specify which subset to load as the second argument. The available configurations are
```'de', 'fr', 'it', 'swiss', 'fr_dalloux', 'fr_all', 'en_bioscope', 'en_sherlock', 'en_sfu', 'en_all', 'all_all'```.

```
from datasets import load_dataset

dataset = load_dataset("rcds/MultiLegalNeg", "all_all")

dataset
```
```
DatasetDict({
    train: Dataset({
        features: ['text', 'spans', 'tokens'],
        num_rows: 26440
    })
    test: Dataset({
        features: ['text', 'spans', 'tokens'],
        num_rows: 7593
    })
    validation: Dataset({
        features: ['text', 'spans', 'tokens'],
        num_rows: 4053
    })
})
```
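
To load a single subset instead, pass its configuration name, for example:

```
from datasets import load_dataset

# Load only the French legal subset; the other configurations
# listed above work the same way
fr_dataset = load_dataset("rcds/MultiLegalNeg", "fr")

print(fr_dataset["train"][0]["text"])  # first training sentence
```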

### Source Data
| Subset             | Source |
|--------------------|--------|
| **fr**             | [Niklaus et al., 2021](https://aclanthology.org/2021.nllp-1.3/), [Niklaus et al., 2023](https://arxiv.org/abs/2306.02069) |
| **it**             | [Niklaus et al., 2021](https://aclanthology.org/2021.nllp-1.3/), [Niklaus et al., 2023](https://arxiv.org/abs/2306.02069) |
| **de (DE)**        | [Glaser et al., 2021](https://www.scitepress.org/Link.aspx?doi=10.5220/0010246308120821) |
| **de (CH)**        | [Niklaus et al., 2021](https://aclanthology.org/2021.nllp-1.3/) |
| **SFU Review**     | [Konstantinova et al., 2012](http://www.lrec-conf.org/proceedings/lrec2012/pdf/533_Paper.pdf) |
| **BioScope**       | [Szarvas et al., 2008](https://aclanthology.org/W08-0606/) |
| **ConanDoyle-neg** | [Morante and Blanco, 2012](https://aclanthology.org/S12-1035/) |
| **Dalloux**        | [Dalloux et al., 2020](https://clementdalloux.fr/?page_id=28) |


### Annotations
The data is annotated for negation cues and their scopes. Annotation guidelines are available [here](https://github.com/RamonaChristen/Multilingual_Negation_Scope_Resolution_on_Legal_Data/blob/main/Annotation_Guidelines.pdf).
#### Annotation process
Each language was annotated by a single native-speaking annotator, following strict annotation guidelines.


### Citation Information

Please cite the following preprint:

```
@misc{christen2023resolving,
      title={Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents}, 
      author={Ramona Christen and Anastassia Shaitarova and Matthias Stürmer and Joel Niklaus},
      year={2023},
      eprint={2309.08695},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```