---
dataset_info:
  features:
    - name: original_nl_question
      dtype: string
    - name: recased_nl_question
      dtype: string
    - name: sparql_query
      dtype: string
    - name: verbalized_sparql_query
      dtype: string
    - name: nl_subject
      dtype: string
    - name: nl_property
      dtype: string
    - name: nl_object
      dtype: string
    - name: nl_answer
      dtype: string
    - name: rdf_subject
      dtype: string
    - name: rdf_property
      dtype: string
    - name: rdf_object
      dtype: string
    - name: rdf_answer
      dtype: string
    - name: rdf_target
      dtype: string
  splits:
    - name: train
      num_bytes: 11403929
      num_examples: 34374
    - name: validation
      num_bytes: 1614051
      num_examples: 4867
    - name: test
      num_bytes: 3304281
      num_examples: 9961
  download_size: 7595264
  dataset_size: 16322261
task_categories:
  - question-answering
  - text-generation
tags:
  - qa
  - knowledge-graph
  - sparql
language:
  - en
---

Dataset Card for SimpleQuestions-SPARQLtoText

Dataset Description

Dataset Summary

This is a special version of the SimpleQuestions dataset, with SPARQL queries formatted for the SPARQL-to-Text task.

JSON fields

The original version of SimpleQuestions is a raw text file listing triples and their natural language questions. A JSON version has been generated and augmented with the following fields (a loading sketch follows the list):

  • rdf_subject, rdf_property, rdf_object: triple in the Wikidata format (IDs)

  • nl_subject, nl_property, nl_object: triple with labels retrieved from Wikidata. Entities without a label are marked as UNDEFINED_LABEL

  • sparql_query: SPARQL query with Wikidata IDs

  • verbalized_sparql_query: SPARQL query with labels

  • original_nl_question: original natural language question from SimpleQuestions. This is in lower case.

  • recased_nl_question: Version of original_nl_question where the named entities have been automatically recased based on the labels of the entities.
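As a convenience, here is a minimal sketch showing how these fields can be inspected with the Hugging Face datasets library. The repository identifier is a placeholder (it is not given in this card) and must be replaced with the dataset's actual id on the Hub.

```python
from datasets import load_dataset

# Placeholder repository id: substitute this dataset's actual Hub identifier.
dataset = load_dataset("<hub-user>/SimpleQuestions-SPARQLtoText")

example = dataset["train"][0]
for field in [
    "original_nl_question", "recased_nl_question",
    "sparql_query", "verbalized_sparql_query",
    "nl_subject", "nl_property", "nl_object", "nl_answer",
    "rdf_subject", "rdf_property", "rdf_object", "rdf_answer",
    "rdf_target",
]:
    print(f"{field}: {example[field]}")
```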

Format of the SPARQL queries

  • Variable names are randomized

  • Delimiters are separated by spaces (a sketch of this normalization appears below)
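As an illustration only, the sketch below approximates this normalization in Python: it spaces out delimiters and replaces variable names with random one-letter names. It is not the authors' actual preprocessing script, and the input query is a made-up example.

```python
import random
import re
import string

def normalize_sparql(query, seed=None):
    """Sketch: space out SPARQL delimiters and randomize variable names."""
    rng = random.Random(seed)
    # Put spaces around braces, parentheses and dots, then squeeze whitespace.
    spaced = re.sub(r"([{}().])", r" \1 ", query)
    spaced = re.sub(r"\s+", " ", spaced).strip()
    # Collect variables in order of first appearance.
    variables = []
    for var in re.findall(r"\?\w+", spaced):
        if var not in variables:
            variables.append(var)
    # Assign each variable a distinct random letter (longest names replaced
    # first; name-collision edge cases are ignored in this sketch).
    letters = rng.sample(string.ascii_lowercase, len(variables))
    for var, letter in zip(sorted(variables, key=len, reverse=True), letters):
        spaced = spaced.replace(var, "?" + letter)
    return spaced

print(normalize_sparql("SELECT ?answer WHERE {wd:Q76 wdt:P26 ?answer}", seed=0))
# Prints something like: SELECT ?f WHERE { wd:Q76 wdt:P26 ?f }
```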

Answerable/unanswerable

Some questions in SimpleQuestions cannot be answered. Hence, the dataset originally comes with two versions of the train/validation/test sets: one with all entries, and another with answerable questions only.

Languages

  • English

Dataset Structure

Types of questions

Comparison of question types with those of related datasets:

SimpleQuestions ParaQA LC-QuAD 2.0 CSQA WebNLG-QA
Number of triplets in query 1 βœ“ βœ“ βœ“ βœ“ βœ“
2 βœ“ βœ“ βœ“ βœ“
More βœ“ βœ“ βœ“
Logical connector between triplets Conjunction βœ“ βœ“ βœ“ βœ“ βœ“
Disjunction βœ“ βœ“
Exclusion βœ“ βœ“
Topology of the query graph Direct βœ“ βœ“ βœ“ βœ“ βœ“
Sibling βœ“ βœ“ βœ“ βœ“
Chain βœ“ βœ“ βœ“ βœ“
Mixed βœ“ βœ“
Other βœ“ βœ“ βœ“ βœ“
Variable typing in the query None βœ“ βœ“ βœ“ βœ“ βœ“
Target variable βœ“ βœ“ βœ“ βœ“
Internal variable βœ“ βœ“ βœ“ βœ“
Comparisons clauses None βœ“ βœ“ βœ“ βœ“ βœ“
String βœ“ βœ“
Number βœ“ βœ“ βœ“
Date βœ“ βœ“
Superlative clauses No βœ“ βœ“ βœ“ βœ“ βœ“
Yes βœ“
Answer type Entity (open) βœ“ βœ“ βœ“ βœ“ βœ“
Entity (closed) βœ“ βœ“
Number βœ“ βœ“ βœ“
Boolean βœ“ βœ“ βœ“ βœ“
Answer cardinality 0 (unanswerable) βœ“ βœ“
1 βœ“ βœ“ βœ“ βœ“ βœ“
More βœ“ βœ“ βœ“ βœ“
Number of target variables 0 (β‡’ ASK verb) βœ“ βœ“ βœ“ βœ“
1 βœ“ βœ“ βœ“ βœ“ βœ“
2 βœ“ βœ“
Dialogue context Self-sufficient βœ“ βœ“ βœ“ βœ“ βœ“
Coreference βœ“ βœ“
Ellipsis βœ“ βœ“
Meaning Meaningful βœ“ βœ“ βœ“ βœ“ βœ“
Non-sense βœ“

Data splits

Text verbalizations are only available for a subset of the test set, referred to as the challenge set. The other samples only contain dialogues in the form of follow-up SPARQL queries.

                           Train       Validation   Test
Questions                  34,000      5,000        10,000
NL questions per query     1
Characters per query       70 (± 10)
Tokens per question        7.4 (± 2.1)

Additional information

Related datasets

This corpus is part of a set of five datasets released for SPARQL-to-Text generation, namely:

  • SimpleQuestions
  • ParaQA
  • LC-QuAD 2.0
  • CSQA
  • WebNLG-QA

Licensing information

  • Content from the original dataset: CC BY 3.0
  • New content: CC BY-SA 4.0

Citation information

This version of the corpus (with normalized SPARQL queries):

@inproceedings{lecorve2022sparql2text,
  title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications},
  author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.},
  booktitle={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)},
  year={2022}
}

Original version of SimpleQuestions:

@article{bordes2015large,
  title={Large-scale simple question answering with memory networks},
  author={Bordes, Antoine and Usunier, Nicolas and Chopra, Sumit and Weston, Jason},
  journal={arXiv preprint arXiv:1506.02075},
  year={2015}
}