Evaluation:
- BLEU (https://github.com/mjpost/sacrebleu)
- COMET (https://github.com/Unbabel/COMET)
- LLM eval
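BLEU here is computed with sacrebleu; purely as an illustration of what the metric measures (clipped n-gram precision combined with a brevity penalty), the following is a self-contained, unsmoothed sketch. It is not the project's code and not sacrebleu's implementation, which handles tokenization, smoothing, and multiple references.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of all n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(preds, refs, max_n=4):
    # Corpus-level BLEU sketch: whitespace tokenization, single reference,
    # no smoothing (any zero n-gram precision gives a score of 0).
    match = [0] * max_n
    total = [0] * max_n
    pred_len = ref_len = 0
    for pred, ref in zip(preds, refs):
        p, r = pred.split(), ref.split()
        pred_len += len(p)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            pc, rc = ngrams(p, n), ngrams(r, n)
            # Clipped matches: each pred n-gram counts at most as often
            # as it appears in the reference.
            match[n - 1] += sum(min(c, rc[g]) for g, c in pc.items())
            total[n - 1] += max(len(p) - n + 1, 0)
    if min(match) == 0:
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    # Brevity penalty discourages short hypotheses.
    bp = 1.0 if pred_len > ref_len else math.exp(1 - ref_len / pred_len)
    return 100 * bp * math.exp(log_prec)
```

An identical prediction and reference scores 100; a hypothesis sharing no n-grams with the reference scores 0.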
Evaluation timeline: Sep 18 - Sep 25
```
Proj-t
└── src
    └── evaluation
        ├── scores
        │   ├── LLM_eval.py (jiaen)
        │   └── scores.py (wizard): comet, sacrebleu
        ├── alignment.py (david)
        ├── evaluation.py (not assigned)
        ├── results
        │   └── mmddyy-HMS-results.csv
        └── logs
```
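The results filename pattern `mmddyy-HMS-results.csv` can be produced with `strftime`; a small sketch (the exact format string is an assumption based on the pattern above):

```python
from datetime import datetime

def results_path(now=None):
    # Assumed filename layout mmddyy-HMS-results.csv,
    # e.g. 092523-141302-results.csv for Sep 25 2023, 14:13:02.
    now = now or datetime.now()
    return now.strftime("%m%d%y-%H%M%S-results.csv")
```

Calling `results_path()` with no argument stamps the current time, so each evaluation run writes a distinct CSV under `results/`.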
entry:
```
python3 evaluation/evaluation.py --pred path/to/pred --gt path/to/gt
```
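The entry command suggests `evaluation.py` parses `--pred` and `--gt` paths; a minimal argparse sketch of that interface (hypothetical, the real script may add flags for metric selection or output location):

```python
import argparse

def build_parser():
    # Hypothetical CLI mirroring the entry command above; not the
    # actual evaluation.py.
    p = argparse.ArgumentParser(
        description="Evaluate predicted translations against ground truth.")
    p.add_argument("--pred", required=True,
                   help="path to predicted translations, one segment per line")
    p.add_argument("--gt", required=True,
                   help="path to ground-truth translations, one segment per line")
    return p

# Simulated invocation; in the script this would be parse_args() on sys.argv.
args = build_parser().parse_args(["--pred", "path/to/pred", "--gt", "path/to/gt"])
```

Both flags are required, so the script fails fast with a usage message if either path is missing.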