mt5-small-dequad-qg / eval /metric.last.sentence.sentence_answer.question.asahi417_qg_dequad.default.json
model update
14df5f5
{"validation": {"Bleu_1": 0.10537824751392172, "Bleu_2": 0.044149261035308314, "Bleu_3": 0.018595574157800648, "Bleu_4": 8.439401263231011e-07, "METEOR": 0.10559185099848878, "ROUGE_L": 0.10277996419466001, "BERTScore": 0.7912326901398178}, "test": {"Bleu_1": 0.09790646807623696, "Bleu_2": 0.04132316934119506, "Bleu_3": 0.01725014115290599, "Bleu_4": 0.005883221906316307, "METEOR": 0.10755967365964325, "ROUGE_L": 0.09911581167878843, "BERTScore": 0.7846174814750657}}