---
inference: false
license: mit
language:
- en
metrics:
- exact_match
- f1
- bertscore
pipeline_tag: text-classification
---
# QA-Evaluation-Metrics

[![PyPI version qa-metrics](https://img.shields.io/pypi/v/qa-metrics.svg)](https://pypi.org/project/qa-metrics/) 


QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models. It provides a range of basic metrics for assessing the performance of QA models. Check out our **CFMatcher**, a matching method that goes beyond token-level matching and is more efficient than LLM-based matching, while retaining evaluation performance competitive with transformer LLM models.

If you find this repo useful, please cite our paper:
```bibtex
@misc{li2024cfmatch,
  title={CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering}, 
  author={Zongxia Li and Ishani Mondal and Yijun Liang and Huy Nghiem and Jordan Boyd-Graber},
  year={2024},
  eprint={2401.13170},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Installation

To install the package, run the following command:

```bash
pip install qa-metrics
```
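
To confirm the install worked, you can try importing one of the metric entry points (a quick check, not from the official docs):

```python
# Hypothetical post-install check: the import below should succeed
# if qa-metrics installed correctly.
from qa_metrics.em import em_match

print("qa-metrics is ready:", callable(em_match))
```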

## Usage

The Python package currently provides four QA evaluation metrics. The examples below all reuse the `reference_answer` and `candidate_answer` defined in the Exact Match example.

#### Exact Match
```python
from qa_metrics.em import em_match

reference_answer = ["Charles , Prince of Wales"]
candidate_answer = "Prince Charles"
match_result = em_match(reference_answer, candidate_answer)
print("Exact Match: ", match_result)
```
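
The same call can be looped over a small batch to get an aggregate score; a minimal sketch, where the `examples` list is illustrative and we assume `em_match` returns a boolean as the example above suggests:

```python
from qa_metrics.em import em_match

# Illustrative mini-batch: each entry pairs a list of gold answers
# with one candidate answer (values here are made up for the sketch).
examples = [
    (["Charles , Prince of Wales"], "Prince Charles"),
    (["Paris"], "Paris"),
]

# Fraction of pairs judged an exact match, assuming em_match returns a bool.
accuracy = sum(em_match(refs, cand) for refs, cand in examples) / len(examples)
print("Exact-match accuracy:", accuracy)
```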

#### Transformer Match
Our fine-tuned BERT model is hosted in this repository. Our package also supports downloading the model and matching directly. More matching transformer models will be available 🔥🔥🔥

```python
from qa_metrics.transformerMatcher import TransformerMatcher

question = "who will take the throne after the queen dies"
tm = TransformerMatcher("distilroberta")
scores = tm.get_scores(reference_answer, candidate_answer, question)
match_result = tm.transformer_match(reference_answer, candidate_answer, question)
print("Score: %s; CF Match: %s" % (scores, match_result))
```
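
As the print statement suggests, `get_scores` returns the matcher's raw match scores, while `transformer_match` converts them into a boolean judgment.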

#### F1 Score
```python
from qa_metrics.f1 import f1_match, f1_score_with_precision_recall

f1_stats = f1_score_with_precision_recall(reference_answer[0], candidate_answer)
print("F1 stats: ", f1_stats)

match_result = f1_match(reference_answer, candidate_answer, threshold=0.5)
print("F1 Match: ", match_result)
```
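
Here `f1_score_with_precision_recall` reports token-level statistics against a single reference, while `f1_match` judges the pair a match when the F1 score crosses the given `threshold` (0.5 in this example).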

#### CFMatch
```python
from qa_metrics.cfm import CFMatcher

question = "who will take the throne after the queen dies"
cfm = CFMatcher()
scores = cfm.get_scores(reference_answer, candidate_answer, question)
match_result = cfm.cf_match(reference_answer, candidate_answer, question)
print("Score: %s; CF Match: %s" % (scores, match_result))
```
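
Since the metrics share the same inputs, they can also be run side by side on one QA pair; a minimal sketch combining the calls shown above (the `results` dict is illustrative):

```python
from qa_metrics.em import em_match
from qa_metrics.f1 import f1_match
from qa_metrics.cfm import CFMatcher

question = "who will take the throne after the queen dies"
reference_answer = ["Charles , Prince of Wales"]
candidate_answer = "Prince Charles"

# Collect each matcher's boolean judgment for the same QA pair.
results = {
    "exact_match": em_match(reference_answer, candidate_answer),
    "f1_match": f1_match(reference_answer, candidate_answer, threshold=0.5),
    "cf_match": CFMatcher().cf_match(reference_answer, candidate_answer, question),
}
print(results)
```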

## Updates
- [01/24/24] 🔥 The full paper has been uploaded and can be accessed [here](https://arxiv.org/abs/2401.13170). The dataset has been expanded and the leaderboard updated.
- Our training dataset is adapted and augmented from [Bulian et al.](https://github.com/google-research-datasets/answer-equivalence-dataset). Our [dataset repo](https://github.com/zli12321/Answer_Equivalence_Dataset.git) includes the augmented training set and the QA evaluation test sets discussed in our paper.


## License

This project is licensed under the [MIT License](LICENSE.md) - see the LICENSE file for details.

## Contact

For any additional questions or comments, please contact [zli12321@umd.edu](mailto:zli12321@umd.edu).