nxphi47 committed (verified) · Commit 9d3ea59 · Parent: 5349a93

Update README.md

Files changed (1): README.md (+31, -9)
 
  path: truthfulqa/validation-*
---

# ContextualBench - A comprehensive toolkit to evaluate LMs on different contextual datasets

Evaluation Code: [SalesforceAIResearch/SFR-RAG](https://github.com/SalesforceAIResearch/SFR-RAG)

## Description

ContextualBench is a powerful evaluation framework designed to assess the performance of Large Language Models (LLMs) on contextual datasets. It provides a flexible pipeline for evaluating various LLM families across different tasks, with a focus on handling large context inputs.

> Each individual evaluation dataset in ContextualBench is licensed separately, and users must adhere to each dataset's license.

## Features

* Dynamic Retrieval Support: efficiently handles large context inputs, allowing comprehensive evaluation of LLMs' contextual understanding capabilities.
* Extensive Evaluation Datasets: supports 7 contextual tasks, including question answering (QA), multi-hop question answering, and classification tasks.
* Multi-LLM Family Support: compatible with a wide range of LLM families, including Hugging Face models, Gemma, Mistral, OpenAI, and Cohere.

The dataset can be loaded as follows:

```python
from datasets import load_dataset

task = "hotpotqa"  # or any other option, e.g. triviaqa, popqa, 2wiki, MuSiQue, NaturalQuestions, etc.
load_dataset("Salesforce/ContextualBench", task, split="validation")
```
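
A loaded config behaves like any other Hugging Face `datasets` split, so a quick inspection works the same way for every task. A minimal sketch that assumes nothing task-specific (column names differ per task):

```python
from datasets import load_dataset

# Load one task config and inspect it; column names differ per task,
# so this prints only what is actually present.
ds = load_dataset("Salesforce/ContextualBench", "hotpotqa", split="validation")

print(len(ds))          # number of validation examples
print(ds.column_names)  # task-specific fields
print(ds[0])            # first example as a dict
```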

## Component Datasets

### 2WikiHotpotQA

This is a multi-hop question answering task, proposed in "Constructing A Multi-hop QA Dataset for Comprehensive Evaluation of Reasoning Steps" by Ho et al. The folder contains the evaluation script and the path to the dataset; the validation split contains around 12k samples.
 
```
…
}
```
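
To check the split size quoted above, the config can be loaded on its own; `2wiki` is the task name used in the loading example earlier:

```python
from datasets import load_dataset

# "2wiki" is the 2WikiHotpotQA task name from the loading example above.
ds = load_dataset("Salesforce/ContextualBench", "2wiki", split="validation")
print(f"{len(ds)} validation samples")  # around 12k, per the description
```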

### HotpotQA

HotpotQA is a dataset of Wikipedia-based question-answer pairs whose questions require finding and reasoning over multiple supporting documents to answer. We evaluate on 7,405 datapoints in the distractor setting. This dataset was proposed in the paper below:
```
…
}
```
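
The evaluation scripts themselves live in the SFR-RAG repository linked above. For orientation only, exact match over short answers is conventionally scored with light string normalization; the following is a minimal sketch of that convention, not the benchmark's official scorer:

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> bool:
    return normalize(prediction) == normalize(gold)

# Hypothetical predictions paired with gold answers:
pairs = [("The Eiffel Tower", "eiffel tower"), ("Paris", "London")]
em = sum(exact_match(p, g) for p, g in pairs) / len(pairs)
print(f"EM: {em:.2%}")  # EM: 50.00%
```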

### MuSiQue

This dataset is a multi-hop question answering task that requires 2-4 hops for every question, making it slightly harder than the other multi-hop tasks. This dataset was proposed in the paper below:

```
…
}
```

### NaturalQuestions

The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question.

```
…
}
```

### PopQA

PopQA is a large-scale open-domain question answering (QA) dataset. We use its long-tail subset, consisting of 1,399 rare-entity queries whose monthly Wikipedia page views are less than 100.

Make sure to cite the work:
 
```
…
}
```

### TriviaQA

TriviaQA is a reading comprehension dataset containing question-answer pairs authored by trivia enthusiasts, plus independently gathered evidence documents (six per question on average) that provide high-quality distant supervision for answering the questions.
```
…
}
```

### TruthfulQA

TruthfulQA is a benchmark that measures whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions spanning 38 categories, including health, law, finance, and politics. Questions are crafted so that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human text.
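
Since every task above is a config of this one dataset repository, all validation splits can be pulled in a single loop. A minimal sketch; the task spellings follow the loading comment near the top and are assumptions to verify against the actual config list:

```python
from datasets import load_dataset

# Task names as written in the loading example; exact casing of
# "2wiki", "musique", etc. is an assumption worth double-checking.
TASKS = ["hotpotqa", "triviaqa", "popqa", "2wiki",
         "musique", "naturalquestions", "truthfulqa"]

for task in TASKS:
    ds = load_dataset("Salesforce/ContextualBench", task, split="validation")
    print(f"{task}: {len(ds)} validation examples")
```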