Upload evaluation_prompts.csv
evaluation_prompts.csv (new file, +282 lines)
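The file is a two-column CSV (`evaluation_method`, `evaluation_prompt`); each prompt is a multi-line quoted field whose `{{...}}` placeholders (for example `{{user_request}}`, `{{context_document}}`, `{{response}}`) are filled in at evaluation time. Below is a minimal loading-and-rendering sketch; it assumes `pandas` is available, treats the placeholders as plain string markers rather than any particular templating engine, and uses a hypothetical `call_judge_model` stand-in for whatever LLM client actually runs the judge.

```python
# Sketch only: load the prompt table and fill one template's placeholders.
# `call_judge_model` is a hypothetical placeholder, not part of this dataset.
import pandas as pd

prompts = pd.read_csv("evaluation_prompts.csv")
by_method = dict(zip(prompts["evaluation_method"], prompts["evaluation_prompt"]))

def render(method: str, **fields: str) -> str:
    """Substitute {{name}} markers in the chosen evaluation prompt."""
    text = by_method[method]
    for name, value in fields.items():
        text = text.replace("{{" + name + "}}", value)
    return text

judge_input = render(
    "response_level",
    user_request="What color are bananas?",
    context_document="Apples are red fruits. Bananas are yellow fruits.",
    response="Bananas are yellow.",
)
# verdict = call_judge_model(judge_input)  # hypothetical client call
```

Plain substitution is used here only to sidestep assumptions about the exact templating engine the accompanying evaluation code expects. The file content follows.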
evaluation_method,evaluation_prompt
response_level,"Your task is to check if the Response is accurate to the Evidence.
Generate 'Accurate' if the Response is accurate when verified according to the Evidence, or 'Inaccurate' if the Response is inaccurate (contradicts the evidence) or cannot be verified.

**Query**:\n\n{{user_request}}\n\n**End of Query**\n
**Evidence**\n\n{{context_document}}\n\n**End of Evidence**\n
**Response**:\n\n{{response}}\n\n**End of Response**\n
Let's think step-by-step."
json_alt,"You are a helpful and harmless AI assistant. You will be provided with a textual context and a model-generated response.
Your task is to analyze the response sentence by sentence and classify each sentence according to its relationship with the provided context.

**Instructions:**

1. **Decompose the response into individual sentences.**
2. **For each sentence, assign one of the following labels:**
* **`supported`**: The sentence is entailed by the given context. Provide a supporting excerpt from the context.
* **`unsupported`**: The sentence is not entailed by the given context. Provide an excerpt that is close but does not fully support the sentence.
* **`contradictory`**: The sentence is falsified by the given context. Provide a contradicting excerpt from the context.
* **`no_rad`**: The sentence does not require factual attribution (e.g., opinions, greetings, questions, disclaimers). No excerpt is needed for this label.

3. **For each label, provide a short rationale explaining your decision.** The rationale should be separate from the excerpt.

**Input Format:**

The input will consist of two parts, clearly separated:

* **Context:** The textual context used to generate the response.
* **Response:** The model-generated response to be analyzed.

**Output Format:**

For each sentence in the response, output a JSON object with the following fields:

* `""sentence""`: The sentence being analyzed.
* `""label""`: One of `supported`, `unsupported`, `contradictory`, or `no_rad`.
* `""rationale""`: A brief explanation for the assigned label.
* `""excerpt""`: A relevant excerpt from the context. Only required for `supported`, `unsupported`, and `contradictory` labels.

Output each JSON object on a new line.

**Example:**

**Input:**

```
Context: Apples are red fruits. Bananas are yellow fruits.

Response: Apples are red. Bananas are green. Enjoy your fruit!
```

**Output:**

{""sentence"": ""Apples are red."", ""label"": ""supported"", ""rationale"": ""The context explicitly states that apples are red."", ""excerpt"": ""Apples are red fruits.""}
{""sentence"": ""Bananas are green."", ""label"": ""contradictory"", ""rationale"": ""The context states that bananas are yellow, not green."", ""excerpt"": ""Bananas are yellow fruits.""}
{""sentence"": ""Enjoy your fruit!"", ""label"": ""no_rad"", ""rationale"": ""This is a general expression and does not require factual attribution."", ""excerpt"": null}

**Now, please analyze the following context and response:**

**User Query:**
{{user_request}}

**Context:**
{{context_document}}

**Response:**
{{response}}"
json,"You are a helpful and harmless AI assistant. You will be provided with a textual context and a model-generated response.
Your task is to analyze the response sentence by sentence and classify each sentence according to its relationship with the provided context.

**Instructions:**

1. **Decompose the response into individual sentences.**
2. **For each sentence, assign one of the following labels:**
* **`supported`**: The sentence is entailed by the given context. Provide a supporting excerpt from the context. The supporting excerpt must *fully* entail the sentence. If you need to cite multiple supporting excerpts, simply concatenate them.
* **`unsupported`**: The sentence is not entailed by the given context. No excerpt is needed for this label.
* **`contradictory`**: The sentence is falsified by the given context. Provide a contradicting excerpt from the context.
* **`no_rad`**: The sentence does not require factual attribution (e.g., opinions, greetings, questions, disclaimers). No excerpt is needed for this label.
3. **For each label, provide a short rationale explaining your decision.** The rationale should be separate from the excerpt.
4. **Be very strict with your `supported` and `contradictory` decisions.** Unless you can find straightforward, indisputable evidence excerpts *in the context* that a sentence is `supported` or `contradictory`, consider it `unsupported`. You should not employ world knowledge unless it is truly trivial.

**Input Format:**

The input will consist of two parts, clearly separated:

* **Context:** The textual context used to generate the response.
* **Response:** The model-generated response to be analyzed.

**Output Format:**

For each sentence in the response, output a JSON object with the following fields:

* `""sentence""`: The sentence being analyzed.
* `""label""`: One of `supported`, `unsupported`, `contradictory`, or `no_rad`.
* `""rationale""`: A brief explanation for the assigned label.
* `""excerpt""`: A relevant excerpt from the context. Only required for `supported` and `contradictory` labels.

Output each JSON object on a new line.

**Example:**

**Input:**

```
Context: Apples are red fruits. Bananas are yellow fruits.

Response: Apples are red. Bananas are green. Bananas are cheaper than apples. Enjoy your fruit!
```

**Output:**

{""sentence"": ""Apples are red."", ""label"": ""supported"", ""rationale"": ""The context explicitly states that apples are red."", ""excerpt"": ""Apples are red fruits.""}
{""sentence"": ""Bananas are green."", ""label"": ""contradictory"", ""rationale"": ""The context states that bananas are yellow, not green."", ""excerpt"": ""Bananas are yellow fruits.""}
{""sentence"": ""Bananas are cheaper than apples."", ""label"": ""unsupported"", ""rationale"": ""The context does not mention the price of bananas or apples."", ""excerpt"": null}
{""sentence"": ""Enjoy your fruit!"", ""label"": ""no_rad"", ""rationale"": ""This is a general expression and does not require factual attribution."", ""excerpt"": null}

**Now, please analyze the following context and response:**

**User Query:**
{{user_request}}

**Context:**
{{context_document}}

**Response:**
{{response}}"
json_with_double_check,"Your task is to verify whether a given sentence is entailed by a given context or not. Answer only in YES or NO without any additional text. Do not try to avoid answering, or apologize, or give any answer that isn't simply YES or NO.

**Sentence**
{{json_dict[""sentence""]}}

**Context**
{{json_dict[""excerpt""]}}"
span_level,"Your task is to check if a specific Span is accurate to the Evidence.
Generate 'Accurate' if the Span is accurate when verified according to the Evidence or when there is nothing to verify in the Span.
Generate 'Inaccurate' if the Span is inaccurate (contradicts the evidence), or cannot be verified.

**Query**:\n\n{{user_request}}\n\n**End of Query**\n
**Evidence**\n\n{{context_document}}\n\n**End of Evidence**\n
**Response**:\n\n{{response}}\n\n**End of Response**\n

You are currently verifying **Span {{ix+1}}** from the Response.
**Span {{ix+1}}**:\n\n{{span}}\n\n**End of Span {{ix+1}}**\n

Is Span {{ix+1}} accurate or inaccurate when verified according to the Evidence? Point to the part of the Evidence that justifies your answer."
implicit_span_level,"Your task is to check if the Response is accurate to the Evidence.
Generate 'Accurate' if the Response is accurate when verified according to the Evidence, or 'Inaccurate' if the Response is inaccurate (contradicts the evidence) or cannot be verified.

**Query**:\n\n{{user_request}}\n\n**End of Query**\n
**Evidence**\n\n{{context_document}}\n\n**End of Evidence**\n
**Response**:\n\n{{response}}\n\n**End of Response**\n

Break down the Response into sentences and classify each one separately, then give the final answer: If even one of the sentences is inaccurate, then the Response is inaccurate.

For example, your output should be of this format:
Sentence 1: <Sentence 1>
Sentence 1 label: Accurate/Inaccurate (choose 1)
Sentence 2: <Sentence 2>
Sentence 2 label: Accurate/Inaccurate (choose 1)
Sentence 3: <Sentence 3>
Sentence 3 label: Accurate/Inaccurate (choose 1)
[...]
Final Answer: Accurate/Inaccurate (choose 1)"
ineligible_responses_filter_with_context,"Your mission is to judge the response from an AI model, the *test* response, calibrating your judgement using a *baseline* response.
Please use the following rubric criteria to judge the responses:

<START OF RUBRICS>
Your task is to analyze the test response based on the criterion of ""Instruction Following"". Start your analysis with ""Analysis"".

**Instruction Following**
Please first list the instructions in the user query.
In general, an instruction is VERY important if it is specifically asked for in the prompt and deviates from the norm. Please highlight such specific keywords.
You should also derive the task type from the user query and include the task-specific implied instructions.
Sometimes, no instruction is available in the user query.
It is your job to infer if the instruction is to autocomplete the user query or is asking the LLM for follow-ups.
After listing the instructions, you should rank them in order of importance.
After that, INDEPENDENTLY check if the test response and the baseline response meet each of the instructions.
You should itemize, for each instruction, whether the response meets, partially meets, or does not meet the requirement, using reasoning.
You should start reasoning first before reaching a conclusion about whether the response satisfies the requirement.
Citing examples while reasoning is preferred.

Reflect on your answer and consider the possibility that you are wrong.
If you are wrong, explain clearly what needs to be clarified, improved, or changed in the rubric criteria and guidelines.

In the end, express your final verdict as one of the following three json objects:

```json
{{
""Instruction Following"": ""No Issues""
}}
```

```json
{{
""Instruction Following"": ""Minor Issue(s)""
}}
```

```json
{{
""Instruction Following"": ""Major Issue(s)""
}}
```

<END OF RUBRICS>

# Your task
## User query
<|begin_of_query|>
{{full_prompt}}
<|end_of_query|>

## Test Response:
<|begin_of_test_response|>
{{response_a}}
<|end_of_test_response|>

## Baseline Response:
<|begin_of_baseline_response|>
{{response_b}}
<|end_of_baseline_response|>

Please write your analysis and final verdict for the test response."
ineligible_responses_filter_no_context,"Your mission is to judge the response from an AI model, the *test* response, calibrating your judgement using a *baseline* response.
Please use the following rubric criteria to judge the responses:

<START OF RUBRICS>
Your task is to analyze the test response based on the criterion of ""Instruction Following"". Start your analysis with ""Analysis"".

**Instruction Following**
Please first list the instructions in the user query.
In general, an instruction is VERY important if it is specifically asked for in the prompt and deviates from the norm. Please highlight such specific keywords.
You should also derive the task type from the user query and include the task-specific implied instructions.
Sometimes, no instruction is available in the user query.
It is your job to infer if the instruction is to autocomplete the user query or is asking the LLM for follow-ups.
After listing the instructions, you should rank them in order of importance.
After that, INDEPENDENTLY check if the test response and the baseline response meet each of the instructions.
You should itemize, for each instruction, whether the response meets, partially meets, or does not meet the requirement, using reasoning.
You should start reasoning first before reaching a conclusion about whether the response satisfies the requirement.
Citing examples while reasoning is preferred.

Reflect on your answer and consider the possibility that you are wrong.
If you are wrong, explain clearly what needs to be clarified, improved, or changed in the rubric criteria and guidelines.

In the end, express your final verdict as one of the following three json objects:

```json
{{
""Instruction Following"": ""No Issues""
}}
```

```json
{{
""Instruction Following"": ""Minor Issue(s)""
}}
```

```json
{{
""Instruction Following"": ""Major Issue(s)""
}}
```

<END OF RUBRICS>

# Your task
## User query
<|begin_of_query|>
{{user_request}}
<|end_of_query|>

## Test Response:
<|begin_of_test_response|>
{{response_a}}
<|end_of_test_response|>

## Baseline Response:
<|begin_of_baseline_response|>
{{response_b}}
<|end_of_baseline_response|>

Please write your analysis and final verdict for the test response."
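The `json` and `json_alt` methods ask the judge for one JSON object per response sentence, and `json_with_double_check` re-verifies a cited excerpt with a bare YES/NO entailment question. A sketch of one plausible way to consume that output is below; it reuses the hypothetical `by_method` and `call_judge_model` helpers from the sketch above, and the grounding aggregation rule is an assumption rather than something this CSV specifies.

```python
# Sketch only: parse per-sentence judge output and apply the double check.
import json

def parse_sentence_labels(judge_output: str) -> list[dict]:
    """Collect the one-object-per-line records the json/json_alt prompts request."""
    rows = []
    for line in judge_output.splitlines():
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip any surrounding prose the judge may add
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue
        if "label" in obj:
            rows.append(obj)
    return rows

def double_check(rows: list[dict]) -> list[dict]:
    """Assumed wiring: re-verify each `supported` sentence against its excerpt."""
    template = by_method["json_with_double_check"]
    for row in rows:
        if row.get("label") == "supported" and row.get("excerpt"):
            prompt = (template
                      .replace('{{json_dict["sentence"]}}', row["sentence"])
                      .replace('{{json_dict["excerpt"]}}', row["excerpt"]))
            answer = call_judge_model(prompt)  # hypothetical client call
            if answer.strip().upper() != "YES":
                row["label"] = "unsupported"
    return rows

def is_grounded(rows: list[dict]) -> bool:
    """Assumed rule: every attribution-bearing sentence must be supported."""
    return all(row["label"] in ("supported", "no_rad") for row in rows)
```

The defensive filtering in `parse_sentence_labels` reflects that judges do not always follow the one-object-per-line instruction exactly; the `{{json_dict[...]}}` replacements assume the CSV quoting has already been unescaped by the reader.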