Pre-trained evaluator from the EMNLP 2022 paper

*[Towards a Unified Multi-Dimensional Evaluator for Text Generation](https://arxiv.org/abs/2210.07197)*

## Introduction

**Multi-dimensional evaluation** is the dominant paradigm for human evaluation in Natural Language Generation (NLG): generated text is judged along multiple explainable dimensions, such as coherence and fluency.

However, automatic evaluation in NLG is still dominated by similarity-based metrics (e.g., ROUGE, BLEU), which are not sufficient to capture the differences between advanced generation models.

Therefore, we propose **UniEval** to bridge this gap and enable a more comprehensive and fine-grained evaluation of NLG systems.

## Pre-trained Evaluator

**unieval-dialog** is the pre-trained evaluator for the dialogue response generation task. It can evaluate a model output along five dimensions (see the scoring sketch after this list):

- *naturalness*
- *coherence*
- *engagingness*
- *groundedness*
- *understandability*
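
As described in the paper, UniEval casts each dimension as a Boolean question-answering problem: the evaluator reads a dimension-specific question together with the response (and, for some dimensions, the dialogue history or grounding fact), and the score is the probability it assigns to "Yes" over "No". The snippet below is a minimal sketch of that scoring step with Hugging Face `transformers`; the Hub identifier `MingZhong/unieval-dialog` and the exact wording of the prompt are assumptions here, so check the GitHub repository for the canonical templates.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed Hub identifier for this model card; adjust if the repo name differs.
MODEL_NAME = "MingZhong/unieval-dialog"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
model.eval()

def yes_probability(prompt: str) -> float:
    """Score = P("Yes") / (P("Yes") + P("No")) at the first decoder step."""
    enc = tokenizer(prompt, return_tensors="pt", truncation=True)
    # "No" is only a placeholder decoder target so the model produces one step of logits.
    dec = tokenizer("No", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(**enc, labels=dec).logits  # shape: (1, target_len, vocab)
    yes_id = tokenizer("Yes").input_ids[0]
    no_id = tokenizer("No").input_ids[0]
    probs = torch.softmax(logits[0, 0, [yes_id, no_id]], dim=-1)
    return probs[0].item()

# Hypothetical prompt for the "naturalness" dimension; the exact question wording
# and field layout come from the GitHub repository.
prompt = (
    "question: Is this a natural response in the dialogue? </s> "
    "response: I'm doing great, thanks for asking! </s> "
    "dialogue history: Hi, how are you today?"
)
print(f"naturalness score: {yes_probability(prompt):.3f}")
```

The repository wraps this logic, including the input construction for all five dimensions, in a ready-made evaluator (see Usage below).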
## Usage

Please refer to [our GitHub repository](https://github.com/maszhongming/UniEval).
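As a quick orientation, the snippet below sketches how the repository's dialogue evaluator is typically invoked; the helper names (`convert_to_json`, `get_evaluator`) and their arguments are assumed to match the repository's current examples, so treat the repo README as authoritative.

```python
# Run inside a clone of https://github.com/maszhongming/UniEval
from utils import convert_to_json
from metric.evaluator import get_evaluator

# One dialogue example: history (src), grounding fact (context), and the response to score.
src_list = ["hi, do you know much about the internet?\n"
            "i know a lot about different sites and some website design, how about you?\n\n"]
context_list = ["the 3 horizontal line menu on apps and websites is called a hamburger button.\n"]
output_list = ["i do too. did you know the 3 horizontal line menu on apps "
               "and websites is called the hamburger button?"]

# Pack the inputs into the format expected by the pre-trained evaluator.
data = convert_to_json(output_list=output_list, src_list=src_list, context_list=context_list)

# Build the dialogue evaluator (loads unieval-dialog) and score all five dimensions at once.
evaluator = get_evaluator("dialogue")
eval_scores = evaluator.evaluate(data, print_result=True)
```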