
Dataset Card for Long-form-sci-qa

Dataset Summary

This dataset contains the question and document pairs annotated for 🥝KIWI, a dataset of knowledge-intensive writing instructions for answering research questions.

Languages

The dataset contains data in English.

Dataset Structure

Data Instances

Each instance is a question, paired with the related work paragraph for which the question was written and a set of relevant papers (those cited in the related work paragraph). An abbreviated example is shown below.
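
An abbreviated example instance, taken from the dataset preview; long fields are truncated with ellipses:

  question: "What are the different approaches proposed for query expansion and reformulation in NLP?"
  related_work_paragraph: "Query Expansion and Reformulation Traditionally, query expansion methods based on pseudo relevance ..."
  cited_papers: [{"corpusId": 252872857, "title": "Query Expansion Using Contextual Clue Sampling with Language Models", "abstract": "..."}, ...]
  cited_paragraphs: [{"corpusId": 252872857, "title": "Query Expansion Using Contextual Clue Sampling with Language Models", "text": "..."}, ...]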

Data Fields

Each instance contains the following fields:

  • question: the input question q (a string; 34–139 characters in this release).
  • related_work_paragraph: the related work paragraph that the annotator wrote the question for (a string; 336–2.16k characters).
  • cited_papers: the list of papers relevant to the question (3–12 per instance); each entry carries a corpusId, title, and abstract.
  • cited_paragraphs: the list of paragraphs extracted from the cited papers (180–1.39k per instance); each entry carries a corpusId, title, and text.
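
Below is a minimal sketch of loading and inspecting these fields with the Hugging Face datasets library. The repository id and split name are assumptions, not taken from this card; substitute the actual values from the Hub page.

# Minimal loading sketch. The repo id "your-org/Long-form-sci-qa" and the
# "train" split are placeholders -- check the Hub page for the real values.
from datasets import load_dataset

ds = load_dataset("your-org/Long-form-sci-qa", split="train")

example = ds[0]
print(example["question"])                      # the input question q
print(example["related_work_paragraph"][:200])  # paragraph the question was written for

# cited_papers entries carry a corpusId, title, and abstract;
# cited_paragraphs holds paragraphs extracted from those papers.
for paper in example["cited_papers"]:
    print(paper["corpusId"], paper["title"])

# Gather the extracted paragraphs belonging to one cited paper.
target = example["cited_papers"][0]["corpusId"]
paragraphs = [p for p in example["cited_paragraphs"] if p["corpusId"] == target]
print(f"{len(paragraphs)} paragraphs extracted from corpusId {target}")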

Dataset Creation

Please refer to our paper (Section 3.1) for details on the annotation process and a discussion of its limitations.

Additional Information

Please check out this dataset for the collected interaction data.

Licensing Information

This dataset is released under the CC BY-SA 4.0 license (https://creativecommons.org/licenses/by-sa/4.0/legalcode).

Citation Information

@article{xu2024kiwi,
  title     = {KIWI: A Dataset of Knowledge-Intensive Writing Instructions for Answering Research Questions},
  author    = {Xu, Fangyuan and Lo, Kyle and Kuehl, Bailey and Soldaini, Luca and Choi, Eunsol and Wadden, David},
  year      = 2024,
}