---
title: Mirror
emoji: 🪞
colorFrom: blue
colorTo: yellow
sdk: docker
pinned: true
license: apache-2.0
---

<div align="center">
  <h1>🪞 Mirror: A Universal Framework for Various Information Extraction Tasks</h1>
  <img src="figs/mirror-frontpage.png" width="300" alt="Magic mirror"><br>
  <i>Image generated by DALL·E 3</i><br>
  <!-- <img src="figs/mirror-framework.png" alt="Mirror Framework"> -->
  <a href="https://arxiv.org/abs/2311.05419" target="_blank">[Paper]</a> | <a href="https://huggingface.co/spaces/Spico/Mirror" target="_blank">[Demo]</a><br>
  📃 Our paper has been accepted to the EMNLP 2023 main conference, <a href="http://arxiv.org/abs/2311.05419" target="_blank">check it out</a>!<br>
</div>

<hr>

😎 This is the official implementation of [🪞Mirror](https://arxiv.org/abs/2311.05419), which supports *almost* all Information Extraction tasks.

The name, Mirror, comes from the classic story *Snow White and the Seven Dwarfs*, in which a magic mirror knows everything in the world.
We aim to build such a powerful tool for the IE community.

## 🔥 Supported Tasks

1. Named Entity Recognition
2. Relation Extraction (Triplet Extraction)
3. Event Extraction
4. Aspect-based Sentiment Analysis
5. Multi-span Extraction (e.g. Discontinuous NER)
6. N-ary Extraction (e.g. Hyper Relation Extraction)
7. Extractive Machine Reading Comprehension (MRC) and Question Answering
8. Classification & Multi-choice MRC

![System Comparison](figs/sys-comparison.png)

## 🌴 Dependencies

Python >= 3.10

```bash
pip install -r requirements.txt
```

## 🚀 QuickStart

### Pretrained Model Weights & Datasets

Download the pretrained model weights & datasets from [[OSF]](https://osf.io/kwsm4/?view_only=5b66734d88cf456b93f17b6bac8a44fb).

No worries: it's an anonymous link used only for double-blind peer review.

### Pretraining

1. Download and unzip the pretraining corpus into `resources/Mirror/v1.4_sampled_v3/merged/all_excluded`
2. Start pretraining:

```bash
CUDA_VISIBLE_DEVICES=0 rex train -m src.task -dc conf/Pretrain_excluded.yaml
```

### Fine-tuning

⚠️ Due to data license constraints, we cannot provide some datasets directly (e.g., ACE04 and ACE05).

1. Download and unzip the pretraining corpus into `resources/Mirror/v1.4_sampled_v3/merged/all_excluded`
2. Download and unzip the fine-tuning datasets into `resources/Mirror/uie/`
3. Start fine-tuning:

```bash
# UIE tasks
CUDA_VISIBLE_DEVICES=0 bash scripts/single_task_wPTAllExcluded_wInstruction/run1.sh
CUDA_VISIBLE_DEVICES=1 bash scripts/single_task_wPTAllExcluded_wInstruction/run2.sh
CUDA_VISIBLE_DEVICES=2 bash scripts/single_task_wPTAllExcluded_wInstruction/run3.sh
CUDA_VISIBLE_DEVICES=3 bash scripts/single_task_wPTAllExcluded_wInstruction/run4.sh
# Multi-span and N-ary extraction
CUDA_VISIBLE_DEVICES=4 bash scripts/single_task_wPTAllExcluded_wInstruction/run_new_tasks.sh
# GLUE datasets
CUDA_VISIBLE_DEVICES=5 bash scripts/single_task_wPTAllExcluded_wInstruction/glue.sh
```

### Analysis Experiments

- Few-shot experiments: `scripts/run_fewshot.sh`. Collect results with `python mirror_fewshot_outputs/get_avg_results.py`
- Mirror w/ PT w/o Inst.: `scripts/single_task_wPTAllExcluded_woInstruction`
- Mirror w/o PT w/ Inst.: `scripts/single_task_wo_pretrain`
- Mirror w/o PT w/o Inst.: `scripts/single_task_wo_pretrain_wo_instruction`

### Evaluation

1. Set `task_dir` and `data_pairs` to the settings you want to evaluate. The default configuration produces the results of Mirror<sub>direct</sub> on all downstream tasks.
2. `CUDA_VISIBLE_DEVICES=0 python -m src.eval`
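
As a reference, the two knobs from step 1 might look like the sketch below. The variable names come straight from the step above, but the concrete paths and dataset names are hypothetical placeholders, not the project's actual defaults:

```python
# Hypothetical evaluation settings. Only `task_dir` and `data_pairs`
# are named in the instructions above -- the paths and dataset names
# below are illustrative placeholders.
task_dir = "mirror_outputs/Mirror_Pretrain_AllExcluded_2"  # trained task dump
data_pairs = [
    # (task type, path to the test split to evaluate on)
    ("ent", "resources/Mirror/uie/ent/conll03/test.jsonl"),
    ("rel", "resources/Mirror/uie/rel/conll04/test.jsonl"),
]
```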

### Demo

1. Download and unzip the pretrained task dump into `mirror_outputs/Mirror_Pretrain_AllExcluded_2`
2. Try our demo:

```bash
CUDA_VISIBLE_DEVICES=0 python -m src.app.api_backend
```
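
Once the backend is up, you can query it over HTTP. The snippet below only sketches a request payload: the field names (`instruction`, `text`) and the endpoint in the comment are assumptions, so check `src/app/api_backend.py` for the real schema before relying on them.

```python
import json

# Sketch of a request body for the demo backend. The field names here
# are assumptions, not the actual API contract -- consult
# src/app/api_backend.py for the real request schema.
payload = {
    "instruction": "Please extract all named entities from the text.",
    "text": "Mirror was accepted to the EMNLP 2023 main conference.",
}
body = json.dumps(payload)
# e.g., POST it to the backend (host/port are hypothetical):
# requests.post("http://127.0.0.1:7860/", data=body)
print(body)
```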

![Demo](figs/mirror-demo.gif)

## 📋 Citation

```bibtex
@misc{zhu_mirror_2023,
  shorttitle = {Mirror},
  title = {Mirror: A Universal Framework for Various Information Extraction Tasks},
  author = {Zhu, Tong and Ren, Junfei and Yu, Zijian and Wu, Mengsong and Zhang, Guoliang and Qu, Xiaoye and Chen, Wenliang and Wang, Zhefeng and Huai, Baoxing and Zhang, Min},
  url = {http://arxiv.org/abs/2311.05419},
  doi = {10.48550/arXiv.2311.05419},
  urldate = {2023-11-10},
  publisher = {arXiv},
  month = nov,
  year = {2023},
  note = {arXiv:2311.05419 [cs]},
  keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language},
}
```

## 🛣️ Roadmap

- [ ] Convert the current model into a Hugging Face version that supports loading via `transformers`, like other newly released LLMs.
- [ ] Remove the `Background` area and merge `TL` and `TP` into a single `T` token
- [ ] Add more task data: keyword extraction, coreference resolution, FrameNet, WikiNER, T-Rex relation extraction dataset, etc.
- [ ] Pre-train on all the data (including benchmarks) to build a nice out-of-the-box toolkit for universal IE.

## 💌 Yours sincerely

This project is licensed under Apache-2.0.
We hope you enjoy it!

<hr>
<div align="center">
  <p>Mirror Team w/ 💖</p>
</div>