---
license: apache-2.0
language:
- en
tags:
- lm-judge
- phi3
- evaluation
- nlp
- conversational
- llamacpp
pipeline_tag: text-generation
library_name: transformers
metrics:
- accuracy
- f1
- precision
- recall
- pearsonr
- spearmanr
- kendall-tau
model_name: Flow-Judge-v0.1-GGUF
base_model: microsoft/Phi-3.5-mini-instruct
inference: false
model_creator: Flow AI
model_type: phi3.5
quantized_by: Flow AI
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63368577d184e6b53c50e6d0/6kSJKgPh2pDh4tA-Ky0xW.png)

# Flow-Judge-v0.1-GGUF
- Original model: [Flow-Judge-v0.1](https://huggingface.co/flowaicom/Flow-Judge-v0.1)
- Model collection: [Flow-Judge-v0.1 models](https://huggingface.co/collections/flowaicom/flow-judge-v01-66e6af5fc3b3a128bde07dec)
- Technical report: [Flow Judge: An Open Small Language Model for LLM System Evaluations](https://www.flow-ai.com/blog/flow-judge)
- Model website: [flow-ai.com/judge](https://www.flow-ai.com/judge)
- About us: [Flow AI](https://www.flow-ai.com/about)
<!-- description start -->
## Description

This repo contains GGUF quants for [Flow-Judge-v0.1](https://huggingface.co/flowaicom/Flow-Judge-v0.1).

## Quantization config

TBD

## Running the GGUF file

TBD
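
In the meantime, the files are standard GGUF quants, so any recent llama.cpp build or binding should load them. Below is a minimal sketch using `llama-cpp-python`; the quant filename is an assumption for illustration, so substitute whichever file you download from this repo:

```python
# Hypothetical example: running a Flow-Judge GGUF quant with llama-cpp-python.
# Assumption: the filename below matches a quant file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./Flow-Judge-v0.1-Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,       # Flow-Judge supports a maximum context length of 8192
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

# `prompt` should be one of the evaluation prompt templates shown below,
# filled in with your inputs, output, criteria, and rubric.
prompt = "..."
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1000,
)
print(result["choices"][0]["message"]["content"])
```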

# Original model card: Flow-Judge-v0.1

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63368577d184e6b53c50e6d0/NgFJqVmUgrhOnphd47VEm.png)

<div class="center-content">
<div class="links">
<a href="https://github.com/flowaicom/flow-judge">flow-judge library</a>
|
<a href="https://www.flow-ai.com/blog/flow-judge">Technical report</a>
</div>
</div>

## Model Summary

Flow-Judge-v0.1 is a compact yet powerful 3.8B model that offers customizable LLM system evaluations across various fields. The model inherits its architecture from the Phi-3.5-mini-instruct model, which enables Flow-Judge to deliver high-quality results while maintaining a small footprint. Despite its smaller size, it achieves performance comparable to larger models in both held-out and out-of-domain benchmarks. Flow-Judge-v0.1 supports multiple scoring scales, provides qualitative feedback, and generates structured evaluation outputs. Trained on a small synthetic dataset, it represents an efficient approach to AI development. Released under the Apache 2.0 license, Flow Judge is an open and accessible model suitable for developers and companies seeking cost-effective and rapid evaluations using custom rubrics.

__More information__
- [Flow Judge website](https://www.flow-ai.com/judge)
- [Technical report](https://www.flow-ai.com/blog/flow-judge)
- [GitHub repo](https://github.com/flowaicom/flow-judge)

__Quantized weights__
- [flowaicom/Flow-Judge-v0.1-AWQ](https://huggingface.co/flowaicom/Flow-Judge-v0.1-AWQ)
- [flowaicom/Flow-Judge-v0.1-GGUF](https://huggingface.co/flowaicom/Flow-Judge-v0.1-GGUF)

__Quickstart__
- [Quickstart notebook](https://github.com/flowaicom/flow-judge/blob/main/examples/1_quickstart.ipynb)

## Intended Use Case
Flow Judge is intended for custom LLM system evaluation tasks.

- Customizable evaluations: Users can define their own evaluation criteria and rubrics, tailoring Flow Judge to their specific needs and requirements. This flexibility allows for the creation of highly targeted assessments that accurately measure the performance of their LLM system.

- Flow Judge supports three different scoring scales:
  - Pass/fail: Suitable for binary assessments, such as determining whether a piece of text meets a specific standard or contains errors.
  - 3-Likert: Allows for more granular evaluations, with scores ranging from negative to neutral to positive. Useful for assessing the overall quality or sentiment of a piece of text.
  - 5-Likert: Provides an even more nuanced assessment, with scores ranging from strongly negative to strongly positive, enabling users to capture subtle differences in quality or sentiment.

- Easy-to-interpret results:
  - Flow Judge produces structured evaluations with `<feedback>` and `<score>` tags, as shown in the example below.
  - Qualitative feedback: Flow Judge detects errors, grades outputs, and provides qualitative feedback that explains its reasoning for assigning a particular score from the rubric while highlighting problematic parts of the responses.
  - Score: Based on the grading rubric, Flow Judge returns a numerical score on a binary, 3-Likert, or 5-Likert scale.
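
For instance, a 5-Likert evaluation comes back in the following shape (the feedback text here is illustrative, not real model output):

```text
<feedback>
The response addresses each of the specific issues raised in the query and
explains how every one of them will be resolved, which matches the
description of the highest score in the rubric...
</feedback>
<score>
5
</score>
```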

## Training

### Model

Flow Judge is based on the Phi-3.5-mini architecture, and the base model checkpoint used is specifically its instruct version. The model uses the same tokenizer, supports MQA and Flash Attention 2, and has weights in bfloat16 precision. However, post-finetuning, the model's support for languages and long context lengths has not been fully tested. Due to specialized Supervised Fine-Tuning (SFT), Flow Judge might show different benchmark results from the base model and supports a maximum context length of 8192 tokens, shorter than the base model's.

### Training Datasets

Flow-Judge-v0.1 has been trained on synthetically generated datasets. The construction of training datasets for Flow Judge involves a multi-step process:

1. Manually curating seed rubrics to serve as a foundation
2. Synthetically generating domain-adapted metrics and rubrics for various domains
3. Synthetically generating training instances with multiple inputs, such as user queries and contextual information
4. Employing a dual-evaluation strategy with consensus to ensure quality and consistency

This process creates a comprehensive and diverse set of training instances that enable accurate, domain-specific evaluations of LLM systems in generative AI products while minimizing human intervention.

Read more about the dataset construction in the [technical report](https://www.flow-ai.com/blog/flow-judge).

### Fine-tuning

For fine-tuning, we used Axolotl's preprocessing to ensure the input training data is consistent. We then conducted supervised fine-tuning based on microsoft/Phi-3.5-mini-instruct using RSLoRA; a rough sketch of such a setup is shown below. More detailed information about the fine-tuning process is provided in our [technical report](https://www.flow-ai.com/blog/flow-judge).
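
As a purely illustrative sketch of what an RSLoRA (rank-stabilized LoRA) setup looks like with Hugging Face `peft` — the rank, alpha, and target modules below are assumptions, not the values used to train Flow-Judge-v0.1:

```python
# Hypothetical RSLoRA setup with peft; all hyperparameters are
# illustrative assumptions, not the ones used for Flow-Judge-v0.1.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,                  # assumed adapter rank
    lora_alpha=32,         # assumed scaling factor
    use_rslora=True,       # rank-stabilized scaling: lora_alpha / sqrt(r)
    target_modules=["qkv_proj", "o_proj", "gate_up_proj", "down_proj"],
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```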

## Usage

### Prompt format

#### Prompt template with inputs
```text
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.
You will be provided with the inputs and output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.

# INPUT
Below are the inputs required for performing the task:
<inputs>
{INPUTS}
</inputs>

# OUTPUT
Below is the output of the task:
<output>
{OUTPUT}
</output>

# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
{EVALUATION_CRITERIA}
</evaluation_criteria>

<scoring_rubric>
{RUBRIC}
</scoring_rubric>

# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the inputs and output: Look at the inputs provided for the task. Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion, decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.

## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.

Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
```

#### Prompt template without inputs
```text
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.

You will be provided the output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.

# OUTPUT
Below is the output of the task:
<output>
{OUTPUT}
</output>

# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
{EVALUATION_CRITERIA}
</evaluation_criteria>

<scoring_rubric>
{RUBRIC}
</scoring_rubric>

# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the output: Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion, decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.

## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.

Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
```

#### Formatted prompt example with inputs
```text
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.
You will be provided with the inputs and output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.
# INPUT
Below are the inputs required for performing the task:
<inputs>
<customer_query>
Subject: Issues with recent order #12345
Hello,
I recently received my order #12345, but I'm having a few problems:
1. The blue t-shirt I ordered is actually green. Can I exchange it?
2. The phone case doesn't fit my iPhone 13 Pro Max. The website said it was compatible.
3. I was charged for express shipping, but it took over a week to arrive. Can I get a refund on the shipping cost?
Please let me know how we can resolve these issues. I've been a loyal customer for years and this is the first time I've had problems like this.
Thank you,
Jane Smith
</customer_query>
</inputs>

# OUTPUT
Below is the output of the task:
<output>
Dear Jane Smith,
Thank you for reaching out about the issues with your recent order #12345. We sincerely apologize for the inconvenience you've experienced. Let me address each of your concerns:
1. T-shirt color discrepancy: We're sorry about the mix-up with the t-shirt color. We'd be happy to exchange it for the correct blue t-shirt. I'll send you a prepaid return label for the green shirt, and we'll ship out the blue one as soon as we receive the return.
2. Phone case compatibility: We apologize for the confusion regarding the phone case compatibility. Our website information should have been more accurate. We'll gladly accept a return of the incompatible case and issue a full refund for it. Additionally, I can recommend a case that will fit your iPhone 13 Pro Max if you're interested.
3. Shipping charge: You're absolutely right about the shipping charge. Since the delivery didn't meet our express shipping standards, we will refund the entire shipping cost to your original payment method. This refund should process within 3-5 business days.
To initiate these resolutions, please reply to this email confirming that you'd like to proceed with the t-shirt exchange and phone case return. Once confirmed, I'll send you the necessary return labels and process the shipping refund immediately.
We truly value your loyalty as a long-time customer and sincerely regret that you've encountered these issues. Rest assured, we're taking steps to prevent similar problems in the future. As a gesture of goodwill, we'd like to offer you a 20% discount on your next order.
If you have any further questions or concerns, please don't hesitate to reach out. We're here to ensure your complete satisfaction.
Best regards,
Alex Johnson
Customer Service Representative
</output>

# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
How well the response addresses the specific issues raised in the customer's query?
</evaluation_criteria>
<scoring_rubric>
- Score 1: The response completely fails to address the customer's needs and ignores the specific issues raised.
- Score 2: The response barely addresses the customer's query and misses most of the specific issues raised.
- Score 3: The response partially addresses the customer's query, touching on some of the specific issues but leaving others unaddressed.
- Score 4: The response adequately addresses most aspects of the customer's query and the specific issues raised.
- Score 5: The response fully and comprehensively addresses all aspects of the customer's query and all specific issues raised in a highly satisfactory manner.
</scoring_rubric>

# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the inputs and output: Look at the inputs provided for the task. Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion, decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.

## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.
Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
```
> Note that inputs and output are formatted with XML tags. See the [flow-judge](https://github.com/flowaicom/flow-judge) repository's formatting functions for more details.

### Inference

Evaluations can easily be run using our [flow-judge](https://github.com/flowaicom/flow-judge) library. It currently supports both the Transformers and vLLM engines; a bare-bones Transformers sketch is shown after the hardware requirements below.

To run Flow Judge efficiently, ensure your hardware meets the following requirements:

- Modern GPU with at least 4 GB VRAM (e.g., NVIDIA RTX series)
- Minimum of 8 GB of system memory
- At least 10 GB of free storage for model files and dependencies
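
A minimal sketch using plain Transformers (the generation settings here are assumptions; the flow-judge library handles prompt formatting and output parsing for you):

```python
# Hypothetical example: evaluating with Flow-Judge-v0.1 via Transformers.
# Generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flowaicom/Flow-Judge-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Fill one of the prompt templates above with your inputs, output,
# evaluation criteria, and scoring rubric.
prompt = "# GOAL\nYour job is to evaluate a task..."
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
out = model.generate(input_ids, max_new_tokens=1000, do_sample=False)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```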

## Evaluation

### Held-out test sets

<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<thead>
<tr>
<th rowspan="2" style="text-align: left;">Evaluator</th>
<th colspan="3" style="text-align: center;">Pass / Fail Held-out Test set</th>
</tr>
<tr>
<th style="text-align: center;">Precision</th>
<th style="text-align: center;">Recall</th>
<th style="text-align: center;">F1</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align: center;">0.685</td>
<td style="text-align: center;"><strong>1.000</strong></td>
<td style="text-align: center;">0.813</td>
</tr>
<tr>
<td style="text-align: left;">meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align: center;"><u>0.870</u></td>
<td style="text-align: center;">0.982</td>
<td style="text-align: center;"><u>0.923</u></td>
</tr>
<tr>
<td style="text-align: left;">mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align: center;">0.709</td>
<td style="text-align: center;"><u>0.994</u></td>
<td style="text-align: center;">0.827</td>
</tr>
<tr>
<td style="text-align: left;">gpt-4o-mini</td>
<td style="text-align: center;">0.834</td>
<td style="text-align: center;">1.000</td>
<td style="text-align: center;">0.910</td>
</tr>
<tr>
<td style="text-align: left;">flowaicom/Flow-Judge-v0.1</td>
<td style="text-align: center;"><strong>0.940</strong></td>
<td style="text-align: center;">0.972</td>
<td style="text-align: center;"><strong>0.955</strong></td>
</tr>
</tbody>
</table>

<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<thead>
<tr>
<th rowspan="2" style="text-align: left;">Evaluator</th>
<th colspan="3" style="text-align: center;">3-Likert Held-out Test set</th>
<th colspan="3" style="text-align: center;">5-Likert Held-out Test set</th>
</tr>
<tr>
<th style="text-align: center;">pearsonr</th>
<th style="text-align: center;">spearmanr</th>
<th style="text-align: center;">kendall-tau</th>
<th style="text-align: center;">pearsonr</th>
<th style="text-align: center;">spearmanr</th>
<th style="text-align: center;">kendall-tau</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align: center;">0.756</td>
<td style="text-align: center;">0.749</td>
<td style="text-align: center;">0.695</td>
<td style="text-align: center;">0.808</td>
<td style="text-align: center;">0.819</td>
<td style="text-align: center;">0.739</td>
</tr>
<tr>
<td style="text-align: left;">prometheus-eval/prometheus-7b-v2.0*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><u>0.910</u></td>
<td style="text-align: center;"><u>0.908</u></td>
<td style="text-align: center;"><u>0.838</u></td>
</tr>
<tr>
<td style="text-align: left;">meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align: center;"><u>0.836</u></td>
<td style="text-align: center;"><u>0.833</u></td>
<td style="text-align: center;"><u>0.789</u></td>
<td style="text-align: center;">0.854</td>
<td style="text-align: center;">0.868</td>
<td style="text-align: center;">0.791</td>
</tr>
<tr>
<td style="text-align: left;">mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align: center;">0.813</td>
<td style="text-align: center;">0.807</td>
<td style="text-align: center;">0.758</td>
<td style="text-align: center;">0.870</td>
<td style="text-align: center;">0.867</td>
<td style="text-align: center;">0.789</td>
</tr>
<tr>
<td style="text-align: left;">gpt-4o-mini</td>
<td style="text-align: center;">0.890</td>
<td style="text-align: center;">0.888</td>
<td style="text-align: center;">0.851</td>
<td style="text-align: center;">0.923</td>
<td style="text-align: center;">0.923</td>
<td style="text-align: center;">0.864</td>
</tr>
<tr>
<td style="text-align: left;">flowaicom/Flow-Judge-v0.1</td>
<td style="text-align: center;"><strong>0.888</strong></td>
<td style="text-align: center;"><strong>0.888</strong></td>
<td style="text-align: center;"><strong>0.852</strong></td>
<td style="text-align: center;"><strong>0.919</strong></td>
<td style="text-align: center;"><strong>0.919</strong></td>
<td style="text-align: center;"><strong>0.856</strong></td>
</tr>
</tbody>
</table>

\* _not suitable for 3-Likert evaluation_

### RAGTruth
<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<tr>
<th rowspan="2" style="text-align: left;">Evaluator</th>
<th colspan="3" style="text-align:center;">RAGTruth QA</th>
<th colspan="3" style="text-align:center;">RAGTruth Data-to-Text</th>
<th colspan="3" style="text-align:center;">RAGTruth Summarization</th>
</tr>
<tr>
<th style="text-align:center;">Precision</th>
<th style="text-align:center;">Recall</th>
<th style="text-align:center;">F1</th>
<th style="text-align:center;">Precision</th>
<th style="text-align:center;">Recall</th>
<th style="text-align:center;">F1</th>
<th style="text-align:center;">Precision</th>
<th style="text-align:center;">Recall</th>
<th style="text-align:center;">F1</th>
</tr>
<tr>
<td>microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align:center;">0.817</td>
<td style="text-align:center;">0.963</td>
<td style="text-align:center;">0.884</td>
<td style="text-align:center;">0.356</td>
<td style="text-align:center;"><strong>1.000</strong></td>
<td style="text-align:center;">0.525</td>
<td style="text-align:center;">0.776</td>
<td style="text-align:center;"><strong>1.000</strong></td>
<td style="text-align:center;"><strong>0.874</strong></td>
</tr>
<tr>
<td>meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align:center;"><strong>0.844</strong></td>
<td style="text-align:center;"><u>0.986</u></td>
<td style="text-align:center;"><strong>0.910</strong></td>
<td style="text-align:center;">0.382</td>
<td style="text-align:center;">0.537</td>
<td style="text-align:center;">0.447</td>
<td style="text-align:center;"><u>0.797</u></td>
<td style="text-align:center;"><u>0.940</u></td>
<td style="text-align:center;">0.863</td>
</tr>
<tr>
<td>mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align:center;">0.821</td>
<td style="text-align:center;"><strong>0.995</strong></td>
<td style="text-align:center;"><u>0.900</u></td>
<td style="text-align:center;">0.357</td>
<td style="text-align:center;"><strong>1.000</strong></td>
<td style="text-align:center;">0.526</td>
<td style="text-align:center;">0.775</td>
<td style="text-align:center;"><strong>1.000</strong></td>
<td style="text-align:center;"><u>0.873</u></td>
</tr>
<tr>
<td>gpt-4o-mini</td>
<td style="text-align:center;">0.830</td>
<td style="text-align:center;">0.966</td>
<td style="text-align:center;">0.893</td>
<td style="text-align:center;">0.398</td>
<td style="text-align:center;">0.994</td>
<td style="text-align:center;">0.569</td>
<td style="text-align:center;">0.786</td>
<td style="text-align:center;">0.997</td>
<td style="text-align:center;">0.879</td>
</tr>
<tr>
<td>Luna*</td>
<td style="text-align:center;">0.378</td>
<td style="text-align:center;">0.800</td>
<td style="text-align:center;">0.513</td>
<td style="text-align:center;">0.649</td>
<td style="text-align:center;">0.912</td>
<td style="text-align:center;"><u>0.759</u></td>
<td style="text-align:center;">0.400</td>
<td style="text-align:center;">0.765</td>
<td style="text-align:center;">0.525</td>
</tr>
<tr>
<td>RAGAS Faithfulness*</td>
<td style="text-align:center;">0.312</td>
<td style="text-align:center;">0.419</td>
<td style="text-align:center;">0.357</td>
<td style="text-align:center;"><strong>0.792</strong></td>
<td style="text-align:center;">0.508</td>
<td style="text-align:center;">0.619</td>
<td style="text-align:center;">0.642</td>
<td style="text-align:center;">0.299</td>
<td style="text-align:center;">0.408</td>
</tr>
<tr>
<td>Trulens Groundedness*</td>
<td style="text-align:center;">0.228</td>
<td style="text-align:center;">0.925</td>
<td style="text-align:center;">0.366</td>
<td style="text-align:center;"><u>0.669</u></td>
<td style="text-align:center;"><u>0.965</u></td>
<td style="text-align:center;"><strong>0.790</strong></td>
<td style="text-align:center;">0.402</td>
<td style="text-align:center;">0.500</td>
<td style="text-align:center;">0.445</td>
</tr>
<tr>
<td>flowaicom/Flow-Judge-v0.1</td>
<td style="text-align:center;"><u>0.835</u></td>
<td style="text-align:center;">0.961</td>
<td style="text-align:center;">0.894</td>
<td style="text-align:center;">0.541</td>
<td style="text-align:center;">0.249</td>
<td style="text-align:center;">0.341</td>
<td style="text-align:center;"><strong>0.834</strong></td>
<td style="text-align:center;">0.836</td>
<td style="text-align:center;">0.835</td>
</tr>
</table>

\* _reported in the Galileo Luna paper_

### HaluEval, Covid-QA, PubMedQA
<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<thead>
<tr>
<th rowspan="2" style="text-align: left;">Evaluator</th>
<th colspan="4" style="text-align: center;">HaluEval</th>
<th colspan="4" style="text-align: center;">Covid-QA</th>
<th colspan="4" style="text-align: center;">PubMedQA</th>
</tr>
<tr>
<th style="text-align: center;">Precision</th>
<th style="text-align: center;">Recall</th>
<th style="text-align: center;">F1</th>
<th style="text-align: center;">Accuracy</th>
<th style="text-align: center;">Precision</th>
<th style="text-align: center;">Recall</th>
<th style="text-align: center;">F1</th>
<th style="text-align: center;">Accuracy</th>
<th style="text-align: center;">Precision</th>
<th style="text-align: center;">Recall</th>
<th style="text-align: center;">F1</th>
<th style="text-align: center;">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align: center;">0.730</td>
<td style="text-align: center;"><u>0.914</u></td>
<td style="text-align: center;">0.812</td>
<td style="text-align: center;">0.788</td>
<td style="text-align: center;">0.617</td>
<td style="text-align: center;">0.964</td>
<td style="text-align: center;">0.752</td>
<td style="text-align: center;">0.681</td>
<td style="text-align: center;">0.623</td>
<td style="text-align: center;"><u>0.986</u></td>
<td style="text-align: center;">0.764</td>
<td style="text-align: center;">0.696</td>
</tr>
<tr>
<td style="text-align: left;">meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align: center;"><strong>0.864</strong></td>
<td style="text-align: center;">0.891</td>
<td style="text-align: center;"><strong>0.878</strong></td>
<td style="text-align: center;"><u>0.874</u></td>
<td style="text-align: center;"><u>0.663</u></td>
<td style="text-align: center;"><u>0.976</u></td>
<td style="text-align: center;"><u>0.790</u></td>
<td style="text-align: center;">0.734</td>
<td style="text-align: center;"><u>0.681</u></td>
<td style="text-align: center;">0.962</td>
<td style="text-align: center;"><strong>0.797</strong></td>
<td style="text-align: center;">0.750</td>
</tr>
<tr>
<td style="text-align: left;">mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align: center;">0.655</td>
<td style="text-align: center;"><strong>0.993</strong></td>
<td style="text-align: center;">0.789</td>
<td style="text-align: center;">0.735</td>
<td style="text-align: center;">0.651</td>
<td style="text-align: center;"><strong>0.982</strong></td>
<td style="text-align: center;">0.783</td>
<td style="text-align: center;">0.728</td>
<td style="text-align: center;">0.602</td>
<td style="text-align: center;"><strong>0.994</strong></td>
<td style="text-align: center;"><u>0.750</u></td>
<td style="text-align: center;">0.669</td>
</tr>
<tr>
<td style="text-align: left;">gpt-4o-mini</td>
<td style="text-align: center;">0.846</td>
<td style="text-align: center;">0.940</td>
<td style="text-align: center;">0.891</td>
<td style="text-align: center;">0.885</td>
<td style="text-align: center;">0.795</td>
<td style="text-align: center;">0.964</td>
<td style="text-align: center;">0.872</td>
<td style="text-align: center;">0.858</td>
<td style="text-align: center;">0.791</td>
<td style="text-align: center;">0.904</td>
<td style="text-align: center;">0.843</td>
<td style="text-align: center;">0.832</td>
</tr>
<tr>
<td style="text-align: left;">flowaicom/Flow-Judge-v0.1</td>
<td style="text-align: center;"><u>0.826</u></td>
<td style="text-align: center;">0.895</td>
<td style="text-align: center;"><u>0.859</u></td>
<td style="text-align: center;">0.854</td>
<td style="text-align: center;"><strong>0.767</strong></td>
<td style="text-align: center;">0.877</td>
<td style="text-align: center;"><strong>0.818</strong></td>
<td style="text-align: center;">0.807</td>
<td style="text-align: center;"><strong>0.874</strong></td>
<td style="text-align: center;">0.624</td>
<td style="text-align: center;">0.728</td>
<td style="text-align: center;">0.767</td>
</tr>
<tr>
<td style="text-align: left;">gpt-4o*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.879</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.821</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.821</td>
</tr>
<tr>
<td style="text-align: left;">Claude 3 Sonnet*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.845</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.829</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.829</td>
</tr>
<tr>
<td style="text-align: left;">RAGAS Faithfulness*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.706</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.750</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.669</td>
</tr>
<tr>
<td style="text-align: left;">Lynx 8B*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.857</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><u>0.963</u></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><u>0.852</u></td>
</tr>
<tr>
<td style="text-align: left;">Lynx 70B*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><strong>0.884</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><strong>0.975</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><strong>0.904</strong></td>
</tr>
</tbody>
</table>

\* _reported in the Lynx paper_

### Feedback Bench

<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<tr>
<th rowspan="2">Evaluator</th>
<th colspan="3" style="text-align:center;">Feedback Bench</th>
</tr>
<tr>
<th style="text-align:center;">pearsonr</th>
<th style="text-align:center;">spearmanr</th>
<th style="text-align:center;">kendall-tau</th>
</tr>
<tr>
<td>microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align:center;">0.710</td>
<td style="text-align:center;">0.721</td>
<td style="text-align:center;">0.622</td>
</tr>
<tr>
<td>prometheus-eval/prometheus-7b-v2.0*</td>
<td style="text-align:center;"><strong>0.878</strong></td>
<td style="text-align:center;"><strong>0.909</strong></td>
<td style="text-align:center;"><strong>0.773</strong></td>
</tr>
<tr>
<td>meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align:center;">0.742</td>
<td style="text-align:center;">0.749</td>
<td style="text-align:center;">0.654</td>
</tr>
<tr>
<td>mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align:center;">0.720</td>
<td style="text-align:center;">0.724</td>
<td style="text-align:center;">0.632</td>
</tr>
<tr>
<td>gpt-4o-mini</td>
<td style="text-align:center;">0.797</td>
<td style="text-align:center;">0.795</td>
<td style="text-align:center;">0.701</td>
</tr>
<tr>
<td>flowaicom/Flow-Judge-v0.1</td>
<td style="text-align:center;"><u>0.787</u></td>
<td style="text-align:center;"><u>0.789</u></td>
<td style="text-align:center;"><u>0.688</u></td>
</tr>
</table>

\* _reported in the Prometheus paper using a reference answer. Note that the rest of the models have been evaluated without a reference answer._