Zongxia committed on
Commit a06fd8f
1 Parent(s): fc6868d

Update README.md

Files changed (1)
  1. README.md +13 -12
README.md CHANGED
@@ -14,19 +14,8 @@ pipeline_tag: text-classification
 [![PyPI version qa-metrics](https://img.shields.io/pypi/v/qa-metrics.svg)](https://pypi.org/project/qa-metrics/)


- QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models. It provides various basic metrics to assess the performance of QA models. Check out our **CFMatcher**, a matching method that goes beyond token-level matching, is more efficient than LLM-based matching, and still retains evaluation performance competitive with transformer LLM models.
+ QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models. It provides various basic metrics to assess the performance of QA models. Check out our paper [**CFMatcher**](https://arxiv.org/abs/2401.13170), a matching method that goes beyond token-level matching, is more efficient than LLM-based matching, and still retains evaluation performance competitive with transformer LLM models.

- If you find this repo helpful, please cite our paper:
- ```bibtex
- @misc{li2024cfmatch,
-       title={CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering},
-       author={Zongxia Li and Ishani Mondal and Yijun Liang and Huy Nghiem and Jordan Boyd-Graber},
-       year={2024},
-       eprint={2401.13170},
-       archivePrefix={arXiv},
-       primaryClass={cs.CL}
- }
- ```

 ## Installation

@@ -85,6 +74,18 @@ match_result = cfm.cf_match(reference_answer, candidate_answer, question)
 print("Score: %s; CF Match: %s" % (scores, match_result))
 ```

+ If you find this repo helpful, please cite:
+ ```bibtex
+ @misc{li2024cfmatch,
+       title={CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering},
+       author={Zongxia Li and Ishani Mondal and Yijun Liang and Huy Nghiem and Jordan Boyd-Graber},
+       year={2024},
+       eprint={2401.13170},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ ```
+
 ## Updates
 - [01/24/24] 🔥 The full paper is uploaded and can be accessed [here](https://arxiv.org/abs/2401.13170). The dataset is expanded and the leaderboard is updated.
 - Our training dataset is adapted and augmented from [Bulian et al](https://github.com/google-research-datasets/answer-equivalence-dataset). Our [dataset repo](https://github.com/zli12321/Answer_Equivalence_Dataset.git) includes the augmented training set and the QA evaluation test sets discussed in our paper.
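For readers landing on this commit, the second hunk header above quotes the README's usage line `match_result = cfm.cf_match(reference_answer, candidate_answer, question)`. Below is a minimal sketch of how that call might be wired up; the `qa_metrics.cfm` import path, the `CFMatcher()` constructor, the `get_scores` helper, and the example inputs are assumptions not shown in this diff, so defer to the package README on PyPI for the exact API.

```python
# Hypothetical sketch, assuming the qa-metrics package exposes a CFMatcher
# class as described in the README being edited. Only the cf_match(...) call
# and the final print(...) appear verbatim in the diff above; everything else
# (import path, constructor, get_scores helper, example inputs) is assumed.

# pip install qa-metrics

from qa_metrics.cfm import CFMatcher  # assumed import path

cfm = CFMatcher()

question = "Who wrote 'Romeo and Juliet'?"
reference_answer = "William Shakespeare"
candidate_answer = "Romeo and Juliet was written by William Shakespeare."

# Assumed helper that produces the `scores` used in the print below.
scores = cfm.get_scores(reference_answer, candidate_answer, question)

# This call is quoted in the second hunk header of the diff above.
match_result = cfm.cf_match(reference_answer, candidate_answer, question)

print("Score: %s; CF Match: %s" % (scores, match_result))
```

If the installed version exposes different class or method names, the two lines taken from the README (the `cf_match` call and the `print`) are the parts to keep.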