---
dataset_info:
- config_name: ADG
  features:
  - name: text
    dtype: string
  - name: target_entity
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 729565
    num_examples: 3201
  - name: validation
    num_bytes: 168501
    num_examples: 759
  - name: test
    num_bytes: 114693
    num_examples: 470
  download_size: 346052
  dataset_size: 1012759
- config_name: WN
  features:
  - name: text
    dtype: string
  - name: target_entity
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 3010007
    num_examples: 14331
  - name: validation
    num_bytes: 665886
    num_examples: 3320
  - name: test
    num_bytes: 862510
    num_examples: 3463
  download_size: 1253281
  dataset_size: 4538403
configs:
- config_name: ADG
  data_files:
  - split: train
    path: ADG/train-*
  - split: validation
    path: ADG/validation-*
  - split: test
    path: ADG/test-*
- config_name: WN
  data_files:
  - split: train
    path: WN/train-*
  - split: validation
    path: WN/validation-*
  - split: test
    path: WN/test-*
---

# Named-Entities Recognition on Multi-Domain Documents (NERMUD)

Original paper: https://iris.unitn.it/retrieve/d833b9e4-e997-4ee4-b6aa-f5144a85f708/paper42.pdf

NERMuD is a task presented at EVALITA 2023 consisting of the extraction and classification of named entities in a document, such as persons, organizations, and locations.

The original dataset is framed as a word-level classification task. We reframed it as multiclass classification, dropping the extractive part, so that it can be prompted to generative LLMs: the prompt contains a text and one of the named entities it mentions, and the model is asked to return the correct class of that entity (Location, Organization, Person).

To do this, **we generated a separate sample for each named entity** in the original dataset, which uses the Inside-Outside-Beginning (IOB) tagging scheme:

| Original format | Output Format |
| ------- | ------ |
| "L'":O, "astronauta":O, "Umberto":B-PER, "Guidoni":I-PER, "dell'":O "Agenzia":B-ORG, "Spaziale":I-ORG | ("L'astronauta Umberto Guidoni dell'Agenzia Spaziale", "Umberto Guidoni", PERSONA), ("L'astronauta Umberto Guidoni dell'Agenzia Spaziale", "Agenzia Spaziale", ORGANIZZAZIONE) |
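The conversion above can be sketched as a small helper that walks an IOB-tagged token sequence and emits one sample per entity. This is a minimal illustration, not the exact script used to build the dataset; in particular, it naively joins tokens with spaces, whereas the released data preserves the original detokenized text.

```python
def iob_to_samples(tokens, tags):
    """Return (sentence, entity_text, entity_type) triples from IOB tags."""
    sentence = " ".join(tokens)  # naive whitespace joining, for illustration only
    samples, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:  # close the previous entity before opening a new one
                samples.append((sentence, " ".join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:  # an "O" tag closes any open entity
            if current:
                samples.append((sentence, " ".join(current), current_type))
            current, current_type = [], None
    if current:  # flush an entity that runs to the end of the sentence
        samples.append((sentence, " ".join(current), current_type))
    return samples
```

Running it on the example above yields one sample for "Umberto Guidoni" (PER) and one for "Agenzia Spaziale" (ORG), each paired with the full sentence.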

NERMuD comes with three different domains: ADG (Alcide De Gasperi writings), FIC (fiction books), and WN (Wikinews). After our reframing, and some additional cleaning (e.g. removing duplicates and ambiguous samples), we decided to keep only the ADG and WN domains, since FIC turned out to be highly unbalanced on the test set.

## Example

Here you can see the structure of a single sample in the present dataset.

```json
{
  "text": string, # text of the sentence
  "target_entity": string, # text of the entity to classify
  "label": int, # 0: Luogo, 1: Organizzazione, 2: Persona
}
```
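As a minimal sketch of how a sample is consumed, the integer `label` can be mapped back to its class name with a plain lookup (the id-to-name mapping below follows the comment in the schema above):

```python
# Mapping from label id to class name, as defined by this dataset card.
ID2LABEL = {0: "Luogo", 1: "Organizzazione", 2: "Persona"}

sample = {
    "text": "L'astronauta Umberto Guidoni dell'Agenzia Spaziale",
    "target_entity": "Umberto Guidoni",
    "label": 2,
}
label_name = ID2LABEL[sample["label"]]  # "Persona"
```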

## Statistics

| NERMUD WN | Luogo | Persona | Organizzazione |
| :--------: | :----: | :----: | :----: |
| Training | 4661 | 5291 | 4379 |
| Validation | 1217 | 1056 | 1047 |
| Test | 859 | 1373 | 1231 |

| NERMUD ADG | Luogo | Persona | Organizzazione |
| :--------: | :----: | :----: | :----: |
| Training | 891 | 839 | 1471 |
| Validation | 220 | 198 | 341 |
| Test | 97 | 162 | 221 |


## Proposed Prompts

Below we describe the prompts given to the model. For each sample we compute the perplexity of every candidate prompt and choose, as the model's answer, the label whose prompt has the lowest perplexity.
Moreover, for each subtask we define a description that is prepended to the prompts, which the model needs in order to understand the task.

Description of the task: "Data una frase e un'entità, indica se tale entità rappresenta un luogo, un'organizzazione o una persona.\n\n"


### Cloze style:

Label (**Luogo**): "Data la frase: '{{text}}'\nL'entità {{target_entity}} è un luogo"

Label (**Persona**): "Data la frase: '{{text}}'\nL'entità {{target_entity}} è una persona"

Label (**Organizzazione**): "Data la frase: '{{text}}'\nL'entità {{target_entity}} è un'organizzazione"
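The cloze-style selection above can be sketched as follows. The `perplexity` argument is a stand-in for a real language-model scoring function (in practice it would query an LLM); everything else follows the templates and task description given above.

```python
# Task description prepended to every prompt (from the card above).
DESCRIPTION = ("Data una frase e un'entità, indica se tale entità rappresenta "
               "un luogo, un'organizzazione o una persona.\n\n")

# One verbalized cloze template per label.
TEMPLATES = {
    "Luogo": "Data la frase: '{text}'\nL'entità {target_entity} è un luogo",
    "Persona": "Data la frase: '{text}'\nL'entità {target_entity} è una persona",
    "Organizzazione": "Data la frase: '{text}'\nL'entità {target_entity} è un'organizzazione",
}

def classify(text, target_entity, perplexity):
    """Return the label whose filled-in cloze prompt has the lowest perplexity."""
    scores = {
        label: perplexity(DESCRIPTION + tpl.format(text=text, target_entity=target_entity))
        for label, tpl in TEMPLATES.items()
    }
    return min(scores, key=scores.get)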

### MCQA style:

```txt
Data la frase: \"{{text}}\"\nDomanda: A quale tipologia di entità appartiene \"{{target_entity}}\" nella frase precedente?\nA. Luogo\nB. Organizzazione\nC. Persona\nRisposta:
```
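In the MCQA setting, the answer is an option letter rather than a full verbalization: each candidate continuation ("A", "B", "C") is appended to the prompt and scored, and the best-scoring letter is mapped back to its class. A minimal sketch, with `score` again a hypothetical stand-in for an LLM scoring function (lower is better):

```python
MCQA_TEMPLATE = (
    'Data la frase: "{text}"\n'
    'Domanda: A quale tipologia di entità appartiene "{target_entity}" '
    "nella frase precedente?\n"
    "A. Luogo\nB. Organizzazione\nC. Persona\nRisposta:"
)

OPTIONS = {"A": "Luogo", "B": "Organizzazione", "C": "Persona"}

def classify_mcqa(text, target_entity, score):
    """Pick the option letter whose continuation scores best, return its class."""
    prompt = MCQA_TEMPLATE.format(text=text, target_entity=target_entity)
    best = min(OPTIONS, key=lambda letter: score(prompt + " " + letter))
    return OPTIONS[best]
```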

## Results

The following results were obtained with cloze-style prompting over several English and Italian-adapted LLMs.

| NERMUD (AVG) | ACCURACY (5-shots) |
| :-----: | :--: |
| Gemma-2B | 55.25 |
| QWEN2-1.5B | 65.82 |
| Mistral-7B | 83.42 |
| ZEFIRO | 83.24 |
| Llama-3-8B | 85.64 |
| Llama-3-8B-IT | 89.5 |
| ANITA | 88.43 |

## Acknowledgments

We would like to thank the authors of this resource for publicly releasing such an intriguing benchmark.

We also want to thank the students of the [MNLP-2024 course](https://naviglinlp.blogspot.com/), who, in their first homework, tried several interesting prompting and reframing strategies that enabled us to generate this resource.

The original dataset is freely available for download [here](https://github.com/dhfbk/KIND/tree/main/evalita-2023).

## License

All the texts used are publicly available, under a license that permits both research and commercial use. In particular, the texts used for NERMuD are taken from:

- Wikinews (WN), as a source of news texts from the last decades;
- Writings and speeches from the Italian politician Alcide De Gasperi (ADG).