---
inference: false
license: mit
language:
- en
metrics:
- exact_match
- f1
- bertscore
pipeline_tag: text-classification
---
# QA-Evaluation-Metrics

[![PyPI version qa-metrics](https://img.shields.io/pypi/v/qa-metrics.svg)](https://pypi.org/project/qa-metrics/) 
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/17b7vrZqH0Yun2AJaOXydYZxr3cw20Ga6?usp=sharing)

QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models and for prompting black-box and open-source large language models. It provides a range of basic and efficient metrics to assess the performance of QA models. 

### Updates
- Updated to version 0.2.8
  - Supports prompting OpenAI GPT-series models and Claude-series models (assuming openai version >= 1.0).
  - Supports prompting various open-source models such as LLaMA-2-70B-chat and LLaVA-1.5 by calling the [deepinfra](https://deepinfra.com/models) API.


## Installation
* Python version >= 3.6
* openai version >= 1.0


To install the package, run the following command:

```bash
pip install qa-metrics
```

## Usage/Logistics

The Python package currently provides six QA evaluation methods. 
- Given a set of gold answers, a candidate answer to be evaluated, and a question (if applicable), the evaluation returns True if the candidate answer matches any one of the gold answers, and False otherwise (see the sketch after this list).
- The evaluation methods differ in how strictly they judge the correctness of a candidate answer; some correlate with human judgments better than others.
- Normalized Exact Match and Question/Answer Type Evaluation are the most efficient methods. They are suitable for short-form QA datasets such as NQ-OPEN, HotpotQA, TriviaQA, SQuAD, etc.
- Question/Answer Type Evaluation and Transformer Neural Evaluation are cost-free and suitable for both short-form and longer-form QA datasets. They correlate with human judgments better than exact match and F1 score as the gold and candidate answers become longer.
- Black-box LLM evaluations are closest to human evaluations, but they are not cost-free.
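
For orientation, here is a minimal sketch of the shared calling pattern, using two of the methods documented in the sections below; the inputs and the boolean outputs are copied from those examples:

```python
from qa_metrics.em import em_match
from qa_metrics.pedant import PEDANT

# Inputs copied from the example sections below.
reference_answer = ["The Frog Prince", "The Princess and the Frog"]
candidate_answer = "The movie \"The Princess and the Frog\" is loosely based off the Brother Grimm's \"Iron Henry\""
question = "Which movie is loosley based off the Brother Grimm's Iron Henry?"

# Strict, normalized string match: no gold answer exactly matches the long candidate.
print(em_match(reference_answer, candidate_answer))                   # False

# More lenient question/answer-type evaluation.
pedant = PEDANT()
print(pedant.evaluate(reference_answer, candidate_answer, question))  # True
```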

## Normalized Exact Match
#### `em_match`

Returns a boolean indicating whether there are any exact normalized matches between gold and candidate answers.

**Parameters**

- `reference_answer` (list of str): A list of gold (correct) answers to the question.
- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated.

**Returns**

- `boolean`: True if the normalized candidate answer exactly matches any of the normalized reference answers, False otherwise.

```python
from qa_metrics.em import em_match

reference_answer = ["The Frog Prince", "The Princess and the Frog"]
candidate_answer = "The movie \"The Princess and the Frog\" is loosely based off the Brother Grimm's \"Iron Henry\""
match_result = em_match(reference_answer, candidate_answer)
print("Exact Match: ", match_result)
'''
Exact Match:  False
'''
```

## F1 Score
#### `f1_score_with_precision_recall`

Calculates F1 score, precision, and recall between a reference and a candidate answer.

**Parameters**

- `reference_answer` (str): A gold (correct) answer to the question.
- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated.

**Returns**

- `dictionary`: A dictionary containing the F1 score, precision, and recall between a gold and candidate answer.

```python
from qa_metrics.f1 import f1_match, f1_score_with_precision_recall

# reference_answer and candidate_answer are reused from the Exact Match example above.
f1_stats = f1_score_with_precision_recall(reference_answer[0], candidate_answer)
print("F1 stats: ", f1_stats)
'''
F1 stats:  {'f1': 0.25, 'precision': 0.6666666666666666, 'recall': 0.15384615384615385}
'''

match_result = f1_match(reference_answer, candidate_answer, threshold=0.5)
print("F1 Match: ", match_result)
'''
F1 Match:  False
'''
```

## Efficient and Robust Question/Answer Type Evaluation
#### 1. `get_highest_score`

Returns the gold answer and candidate answer pair that has the highest matching score. This function is useful for evaluating the closest match to a given candidate response based on a list of reference answers.

**Parameters**

- `reference_answer` (list of str): A list of gold (correct) answers to the question.
- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated.
- `question` (str): The question for which the answers are being evaluated.

**Returns**

- `dictionary`: A dictionary containing the gold answer and candidate answer that have the highest matching score.

#### 2. `get_scores`

Returns all the gold answer and candidate answer pairs' matching scores.

**Parameters**

- `reference_answer` (list of str): A list of gold (correct) answers to the question.
- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated.
- `question` (str): The question for which the answers are being evaluated.

**Returns**

- `dictionary`: A dictionary mapping each gold answer to its matching score with the candidate answer.

#### 3. `evaluate`

Returns True if the candidate answer is a match of any of the gold answers.

**Parameters**

- `reference_answer` (list of str): A list of gold (correct) answers to the question.
- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated.
- `question` (str): The question for which the answers are being evaluated.

**Returns**

- `boolean`: True if the candidate answer matches any of the reference answers, False otherwise.


```python
from qa_metrics.pedant import PEDANT

question = "Which movie is loosley based off the Brother Grimm's Iron Henry?"
pedant = PEDANT()
scores = pedant.get_scores(reference_answer, candidate_answer, question)
max_pair, highest_scores = pedant.get_highest_score(reference_answer, candidate_answer, question)
match_result = pedant.evaluate(reference_answer, candidate_answer, question)
print("Max Pair: %s; Highest Score: %s" % (max_pair, highest_scores))
print("Score: %s; PANDA Match: %s" % (scores, match_result))
'''
Max Pair: ('the princess and the frog', 'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"'); Highest Score: 0.854451712151719
Score: {'the frog prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.7131625951317375}, 'the princess and the frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.854451712151719}}; PANDA Match: True
'''
```

```python
print(pedant.get_score(reference_answer[1], candidate_answer, question))
'''
0.7122460127464126
'''
```

## Transformer Neural Evaluation
Our fine-tuned BERT model is on 🤗 [Huggingface](https://huggingface.co/Zongxia/answer_equivalence_bert?text=The+goal+of+life+is+%5BMASK%5D.). The package also supports downloading the model and matching directly. [distilroberta](https://huggingface.co/Zongxia/answer_equivalence_distilroberta), [distilbert](https://huggingface.co/Zongxia/answer_equivalence_distilbert), [roberta](https://huggingface.co/Zongxia/answer_equivalence_roberta), and [roberta-large](https://huggingface.co/Zongxia/answer_equivalence_roberta-large) are also supported now! 🔥🔥🔥

#### `transformer_match`

Returns True if the candidate answer is a match of any of the gold answers.

**Parameters**

- `reference_answer` (list of str): A list of gold (correct) answers to the question.
- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated.
- `question` (str): The question for which the answers are being evaluated.

**Returns**

- `boolean`: True if the candidate answer matches any of the reference answers, False otherwise.

```python
from qa_metrics.transformerMatcher import TransformerMatcher

question = "Which movie is loosley based off the Brother Grimm's Iron Henry?"
# Supported models: roberta-large, roberta, bert, distilbert, distilroberta
tm = TransformerMatcher("roberta-large")
scores = tm.get_scores(reference_answer, candidate_answer, question)
match_result = tm.transformer_match(reference_answer, candidate_answer, question)
print("Score: %s; bert Match: %s" % (scores, match_result))
'''
Score: {'The Frog Prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.6934309}, 'The Princess and the Frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.7400551}}; TM Match: True
'''
```
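
As a usage sketch, `transformer_match` can also be applied over a batch to get a dataset-level accuracy. The two-example mini-dataset below is hypothetical and purely for illustration; `tm` is the matcher constructed in the example above:

```python
# Hedged sketch: dataset-level accuracy with transformer_match.
# `tm` is the TransformerMatcher built above; the mini-dataset is made up.
examples = [
    {
        "question": "What is the capital of France?",
        "gold": ["Paris"],
        "candidate": "The capital of France is Paris.",
    },
    {
        "question": "Who wrote Hamlet?",
        "gold": ["William Shakespeare", "Shakespeare"],
        "candidate": "It was written by Christopher Marlowe.",
    },
]

# True counts as 1, so summing the booleans gives the number of correct candidates.
correct = sum(
    tm.transformer_match(ex["gold"], ex["candidate"], ex["question"])
    for ex in examples
)
print("Accuracy: %.2f" % (correct / len(examples)))
```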

## Prompting LLM For Evaluation

Note: The prompting functions below can be used for any prompting purpose, not only QA evaluation.

###### OpenAI
```python
from qa_metrics.prompt_llm import CloseLLM
model = CloseLLM()
model.set_openai_api_key(YOUR_OPENAI_KEY)
prompt = 'question: What is the Capital of France?\nreference: Paris\ncandidate: The capital is Paris\nIs the candidate answer correct based on the question and reference answer? Please only output correct or incorrect.'
model.prompt_gpt(prompt=prompt, model_engine='gpt-3.5-turbo', temperature=0.1, max_tokens=10)

'''
'correct'
'''
```
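
The same client can be wrapped into a small judging helper. The sketch below is only an illustration under stated assumptions: `llm_judge` is a hypothetical helper (not part of the package), the prompt template mirrors the example above, and the parsing assumes the model replies with either `correct` or `incorrect`.

```python
# Hedged sketch: a hypothetical helper that wraps prompt_gpt as a QA judge.
# The prompt template mirrors the example above; parsing assumes the model
# replies with "correct" or "incorrect".
def llm_judge(model, question, reference, candidate):
    prompt = (
        "question: %s\nreference: %s\ncandidate: %s\n"
        "Is the candidate answer correct based on the question and reference answer? "
        "Please only output correct or incorrect." % (question, reference, candidate)
    )
    reply = model.prompt_gpt(prompt=prompt, model_engine='gpt-3.5-turbo',
                             temperature=0.1, max_tokens=10)
    return reply.strip().lower().startswith("correct")

# Reuses the CloseLLM client `model` configured above.
print(llm_judge(model, "What is the Capital of France?", "Paris", "The capital is Paris"))
```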

###### Anthropic
```python
model = CloseLLM()
model.set_anthropic_api_key(YOUR_ANTHROPIC_KEY)
model.prompt_claude(prompt=prompt, model_engine='claude-v1', anthropic_version="2023-06-01", max_tokens_to_sample=100, temperature=0.7)

'''
'correct'
'''
```

###### deepinfra (see the [deepinfra model list](https://deepinfra.com/models) for more supported models)
```python
from qa_metrics.prompt_open_llm import OpenLLM
model = OpenLLM()
model.set_deepinfra_key(YOUR_DEEPINFRA_KEY)
model.prompt(message=prompt, model_engine='mistralai/Mixtral-8x7B-Instruct-v0.1', temperature=0.1, max_tokens=10)

'''
'correct'
'''
```

If you find this repo useful, please cite our paper:
```bibtex
@misc{li2024panda,
      title={PANDA (Pedantic ANswer-correctness Determination and Adjudication): Improving Automatic Evaluation for Question Answering and Text Generation}, 
      author={Zongxia Li and Ishani Mondal and Yijun Liang and Huy Nghiem and Jordan Lee Boyd-Graber},
      year={2024},
      eprint={2402.11161},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```


## Updates
- [01/24/24] 🔥 The full paper is uploaded and can be accessed [here](https://arxiv.org/abs/2402.11161). The dataset has been expanded and the leaderboard updated.
- Our training dataset is adapted and augmented from [Bulian et al](https://github.com/google-research-datasets/answer-equivalence-dataset). Our [dataset repo](https://github.com/zli12321/Answer_Equivalence_Dataset.git) includes the augmented training set and the QA evaluation test sets discussed in our paper.
- Our model now supports [distilroberta](https://huggingface.co/Zongxia/answer_equivalence_distilroberta) and [distilbert](https://huggingface.co/Zongxia/answer_equivalence_distilbert), smaller and more robust matching models than BERT!

## License

This project is licensed under the [MIT License](LICENSE.md) - see the LICENSE file for details.

## Contact

For any additional questions or comments, please contact [zli12321@umd.edu](mailto:zli12321@umd.edu).