MoritzLaurer committed
Commit 417d7df · verified · 1 Parent(s): 5947d55

Upload prompt template grounding_accuracy_response_level.yaml

grounding_accuracy_response_level.yaml ADDED
@@ -0,0 +1,21 @@
+ prompt:
+   template: "Your task is to check if the Response is accurate to the Evidence.\nGenerate 'Accurate' if the Response is accurate
+     when verified according to the Evidence, or 'Inaccurate' if the Response is inaccurate (contradicts the evidence) or cannot
+     be verified.\n\n**Query**:\n\n{{user_request}}\n\n**End of Query**\n\n**Evidence**\n\n{{context_document}}\n\n**End of
+     Evidence**\n\n**Response**:\n\n{{response}}\n\n**End of Response**\n\nLet's think step-by-step."
+   template_variables:
+   - user_request
+   - context_document
+   - response
+   metadata:
+     description: "An evaluation prompt from the paper 'The FACTS Grounding Leaderboard: Benchmarking LLMs’ Ability to Ground
+       Responses to Long-Form Input' by Google DeepMind.\n The prompt was copied from the evaluation_prompts.csv file from
+       Kaggle.\n This specific prompt elicits a binary accurate/inaccurate classifier for the entire response."
+     evaluation_method: response_level
+     tags:
+     - fact-checking
+     version: 1.0.0
+     author: Google DeepMind
+     source: https://www.kaggle.com/datasets/deepmind/FACTS-grounding-examples?resource=download&select=evaluation_prompts.csv
+   client_parameters: {}
+   custom_data: {}
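
For context, a minimal sketch of how this template file might be loaded and rendered, assuming PyYAML and Jinja2 are available (neither is part of this commit) and that the file is saved locally under the committed name; the query, evidence, and response values are made up purely for illustration:

```python
# Minimal sketch (not part of this commit): load the uploaded YAML and fill
# its three declared template_variables before sending the prompt to a judge model.
import yaml
from jinja2 import Template

with open("grounding_accuracy_response_level.yaml") as f:
    spec = yaml.safe_load(f)["prompt"]

# The template uses {{...}} placeholders, so Jinja2 can render it directly.
rendered = Template(spec["template"]).render(
    user_request="What is the capital of France?",          # example query
    context_document="Paris is the capital city of France.",  # example evidence
    response="The capital of France is Paris.",              # example response
)
print(rendered)  # Full evaluation prompt, ending with "Let's think step-by-step."
```

The judge model's output would then be parsed for the 'Accurate' / 'Inaccurate' label that the template elicits for the entire response.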