geolocal committed · commit c4bbbc6 (verified) · 1 parent: f54afe6

Update README.md

Files changed (1): README.md (+48 −1)
pretty_name: FACTS Grounding Public Examples
size_categories:
- n<1K
---
# FACTS Grounding 1.0 Public Examples
#### 860 public FACTS Grounding examples from Google DeepMind and Google Research

FACTS Grounding is a benchmark from Google DeepMind and Google Research designed to measure the performance of AI models on factuality and grounding.

▶ [FACTS Grounding Leaderboard on Kaggle](https://www.kaggle.com/facts-leaderboard)\
▶ [Technical Report](https://storage.googleapis.com/deepmind-media/FACTS/FACTS_grounding_paper.pdf)\
▶ [Evaluation Starter Code](https://www.kaggle.com/code/andrewmingwang/facts-grounding-benchmark-starter-code)\
▶ [Google DeepMind Blog Post](https://deepmind.google/discover/blog/facts-grounding-a-new-benchmark-for-evaluating-the-factuality-of-large-language-models)

## Usage

The FACTS Grounding benchmark evaluates the ability of Large Language Models (LLMs) to generate factually accurate responses grounded in provided long-form documents, encompassing a variety of domains. FACTS Grounding moves beyond simple factual question-answering by assessing whether LLM responses are fully grounded in the provided context and correctly synthesize information from a long context document. By providing a standardized evaluation framework, FACTS Grounding aims to promote the development of LLMs that are both knowledgeable and trustworthy, facilitating their responsible deployment in real-world applications.

## Dataset Description

This dataset is a collection of 860 examples (the public set) crafted by humans for evaluating how well an AI system grounds its answers in a given context. Each example is composed of a few parts:

* A system prompt (`system_instruction`) that gives the model general instructions, including to answer the question using only the information in the given context
* A task (`user_request`) that contains the specific question(s) for the system to answer, e.g. "*What are some tips on saving money?*"
* A long document (`context_document`) that contains the information necessary to answer the question, e.g. an SEC filing for a publicly traded US company

This dataset also contains evaluation prompts (`evaluation_prompts.csv`) for judging model-generated responses to the examples. See the [Technical Report](https://storage.googleapis.com/deepmind-media/FACTS/FACTS_grounding_paper.pdf) for methodology details.

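As a usage illustration, here is a minimal sketch of assembling one example's fields into a chat-style prompt. The field names are the ones listed above; the message format and the `build_prompt` helper are assumptions for illustration, not part of the dataset or the official starter code.

```python
# Minimal sketch: assemble one FACTS Grounding example into chat messages.
# Field names (system_instruction, user_request, context_document) come from
# the dataset description; the helper itself is illustrative.
def build_prompt(example: dict) -> list[dict]:
    """Build a chat-style message list from one example's fields."""
    return [
        {"role": "system", "content": example["system_instruction"]},
        # The model must answer the request using only the provided document.
        {"role": "user",
         "content": example["context_document"] + "\n\n" + example["user_request"]},
    ]
```

The resulting message list can then be passed to any chat-completion API; see the Evaluation Starter Code linked above for a full pipeline.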
## Limitations

While this benchmark represents a step forward in evaluating factual accuracy, more work remains to be done. First, the benchmark relies on potentially noisy automated LLM judge models for evaluation; we attempt to mitigate this by ensembling a range of frontier LLMs and averaging the judges' outputs. Second, the FACTS benchmark focuses only on evaluating grounded responses to long-form text input and could potentially be extended.

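The judge ensembling described above can be sketched as follows. This is an illustrative reduction of the methodology in the Technical Report, assuming each judge emits a binary grounded/ungrounded verdict per response; the actual scoring details differ.

```python
# Illustrative sketch of judge ensembling: average binary verdicts from
# several LLM judges to reduce single-judge noise. This helper is an
# assumption; see the Technical Report for the real aggregation.
def ensemble_score(judge_verdicts: list[int]) -> float:
    """Mean of per-judge verdicts (1 = grounded, 0 = not grounded)."""
    if not judge_verdicts:
        raise ValueError("need at least one judge verdict")
    return sum(judge_verdicts) / len(judge_verdicts)
```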
Questions, comments, or issues? Share your thoughts with us in the [discussion forum](https://www.kaggle.com/facts-leaderboard/discussion).

## Citation

If you use this dataset in your research, please cite our technical report:

```
@misc{kaggle-FACTS-leaderboard,
  author = {Alon Jacovi and Andrew Wang and Chris Alberti and Connie Tao and Jon Lipovetz and Kate Olszewska and Lukas Haas and Michelle Liu and Nate Keating and Adam Bloniarz and Carl Saroufim and Corey Fry and Dror Marcus and Doron Kukliansky and Gaurav Singh Tomar and James Swirhun and Jinwei Xing and Lily Wang and Michael Aaron and Moran Ambar and Rachana Fellinger and Rui Wang and Ryan Sims and Zizhao Zhang and Sasha Goldshtein and Yossi Matias and Dipanjan Das},
  title = {FACTS Leaderboard},
  year = {2024},
  howpublished = {\url{https://kaggle.com/facts-leaderboard}},
  note = {Google DeepMind, Google Research, Google Cloud, Kaggle}
}
```