orkg committed on
Commit
742d828
1 Parent(s): 3a87454

Version 1.0.0 upload

Files changed (3)
  1. .gitattributes +1 -1
  2. README.md +114 -1
  3. SciQA.py +117 -0
.gitattributes CHANGED
@@ -51,4 +51,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Image files - compressed
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,116 @@
  ---
- license: cc-by-4.0
+ annotations_creators:
+ - expert-generated
+ - auto-generated
+ language:
+ - en
+ language_creators:
+ - machine-generated
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: 'The SciQA Scientific Question Answering Benchmark for Scholarly Knowledge'
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ tags:
+ - knowledge-base-qa
+ task_categories:
+ - question-answering
+ task_ids: []
  ---
+
+ # Dataset Card for SciQA
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [SciQA Homepage]()
+ - **Repository:** [SciQA Repository](https://zenodo.org/record/7744048)
+ - **Paper:** The SciQA Scientific Question Answering Benchmark for Scholarly Knowledge
+ - **Point of Contact:** [Yaser Jaradeh](mailto:Yaser.Jaradeh@tib.eu)
+
+ ### Dataset Summary
+
+ SciQA contains 2,565 SPARQL query-question pairs, along with answers fetched from the Open Research Knowledge Graph (ORKG) via a Virtuoso SPARQL endpoint. It is a collection of both handcrafted and auto-generated questions and queries. The dataset is split into 70% training, 10% validation and 20% test examples.
+
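+ The benchmark can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the dataset is published under the `orkg/SciQA` repository id (adjust the id if it is hosted elsewhere):
+
+ ```python
+ from datasets import load_dataset
+
+ # Repository id is an assumption; replace it with the actual Hub id if different.
+ # Newer datasets versions may additionally require trust_remote_code=True for
+ # datasets that ship a loading script.
+ sciqa = load_dataset("orkg/SciQA")
+
+ print(sciqa)                           # DatasetDict with train/validation/test splits
+ print(sciqa["train"][0]["question"])   # inspect the first training question
+ ```
+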
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example of a question is given below:
+
+ ```json
+ {
+     "id": "AQ2251",
+     "query_type": "Factoid",
+     "question": {
+         "string": "Provide a list of papers that have utilized the Depth DDPPO model and include the links to their code?"
+     },
+     "paraphrased_question": [],
+     "query": {
+         "sparql": "SELECT DISTINCT ?code\nWHERE {\n ?model a orkgc:Model;\n rdfs:label ?model_lbl.\n FILTER (str(?model_lbl) = \"Depth DDPPO\")\n ?benchmark orkgp:HAS_DATASET ?dataset.\n ?cont orkgp:HAS_BENCHMARK ?benchmark.\n ?cont orkgp:HAS_MODEL ?model;\n orkgp:HAS_SOURCE_CODE ?code.\n}"
+     },
+     "template_id": "T07",
+     "auto_generated": true,
+     "query_shape": "Tree",
+     "query_class": "WHICH-WHAT",
+     "number_of_patterns": 4
+ }
+ ```
+ ### Data Fields
+
+ - `id`: the id of the question
+ - `question`: a string containing the question
+ - `paraphrased_question`: a set of paraphrased versions of the question
+ - `query`: a SPARQL query that answers the question
+ - `query_type`: the type of the query
+ - `template_id`: an optional identifier of the template the query was generated from
+ - `query_shape`: a string indicating the shape of the query
+ - `query_class`: a string indicating the class of the query
+ - `auto_generated`: a boolean indicating whether the question is auto-generated or not
+ - `number_of_patterns`: an integer indicating the number of graph patterns in the query
+
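+ Each `query` can be executed against the ORKG SPARQL endpoint to reproduce the answers. Below is a minimal sketch using `SPARQLWrapper`; the endpoint URL and the `orkgc`/`orkgp` prefix IRIs are assumptions and should be checked against the ORKG documentation:
+
+ ```python
+ from SPARQLWrapper import SPARQLWrapper, JSON
+
+ # Assumed endpoint URL and prefix IRIs; verify them in the ORKG documentation.
+ ENDPOINT = "https://orkg.org/triplestore"
+ PREFIXES = """
+ PREFIX orkgc: <http://orkg.org/orkg/class/>
+ PREFIX orkgp: <http://orkg.org/orkg/predicate/>
+ PREFIX rdfs:  <http://www.w3.org/2000/01/rdf-schema#>
+ """
+
+ def run_query(sparql_query: str) -> dict:
+     """Execute one SciQA query string and return the JSON result bindings."""
+     client = SPARQLWrapper(ENDPOINT)
+     client.setQuery(PREFIXES + sparql_query)
+     client.setReturnFormat(JSON)
+     return client.query().convert()
+
+ # Example: run the query of a single record loaded from the dataset.
+ # results = run_query(sciqa["train"][0]["query"]["sparql"])
+ ```
+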
+ ### Data Splits
+
+ The dataset is split into 70% training, 10% validation and 20% test questions.
+
+ ## Additional Information
+
+ ### Licensing Information
+
+ SciQA is licensed under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
+
+ ### Citation Information
+
+ In review.
+
+ ### Contributions
+
+ Thanks to [@YaserJaradeh](https://github.com/YaserJaradeh) for adding this dataset.
SciQA.py ADDED
@@ -0,0 +1,117 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """The SciQA Scientific Question Answering Benchmark for Scholarly Knowledge"""
+
+
+ import json
+ import os
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """
+ @article{SciQA,
+     title={The SciQA Scientific Question Answering Benchmark for Scholarly Knowledge},
+     author={Auer, Sören and Barone, Dante A. C. and Bartz, Cassiano and Cortes, Eduardo G. and Jaradeh, Mohamad Yaser and Karras, Oliver and Koubarakis, Manolis and Mouromtsev, Dmitry and Pliukhin, Dmitrii and Radyush, Daniil and et al.},
+     year={2023}
+ }
+ """
+
+ _DESCRIPTION = """\
+ SciQA contains 2,565 SPARQL query-question pairs, along with answers fetched from the Open Research Knowledge Graph (ORKG) \
+ via a Virtuoso SPARQL endpoint. It is a collection of both handcrafted and auto-generated questions and queries. \
+ The dataset is split into 70% training, 10% validation and 20% test examples. The dataset is available as JSON files.
+ """
+
+ _URL = "https://zenodo.org/record/7744048/files/SciQA-dataset.zip"
+
+
+ class SciQA(datasets.GeneratorBasedBuilder):
+     """
+     The SciQA Scientific Question Answering Benchmark for Scholarly Knowledge.
+     """
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # Feature schema of a single SciQA record.
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "query_type": datasets.Value("string"),
+                     "question": {
+                         "string": datasets.Value("string")
+                     },
+                     "paraphrased_question": datasets.features.Sequence(datasets.Value("string")),
+                     "query": {
+                         "sparql": datasets.Value("string")
+                     },
+                     "template_id": datasets.Value("string"),
+                     "query_shape": datasets.Value("string"),
+                     "query_class": datasets.Value("string"),
+                     "auto_generated": datasets.Value("bool"),
+                     "number_of_patterns": datasets.Value("int32")
+                 }
+             ),
+             supervised_keys=None,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         dl_dir = dl_manager.download_and_extract(_URL)
+         dl_dir = os.path.join(dl_dir, "SciQA-dataset")
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": os.path.join(dl_dir, "train", "questions.json")},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": os.path.join(dl_dir, "valid", "questions.json")},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": os.path.join(dl_dir, "test", "questions.json")},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+
+         with open(filepath, encoding="utf-8") as f:
+             data = json.load(f)["questions"]
+             for id_, row in enumerate(data):
+                 yield id_, {
+                     "id": row["id"],
+                     "query_type": row["query_type"],
+                     "question": row["question"],
+                     "paraphrased_question": row["paraphrased_question"],
+                     "query": row["query"],
+                     "template_id": row["template_id"],
+                     "query_shape": row["query_shape"],
+                     "query_class": row["query_class"],
+                     "auto_generated": row["auto_generated"],
+                     "number_of_patterns": row["number_of_patterns"]
+                 }
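A quick way to exercise the builder above locally, as a sketch (it assumes a `datasets` version that still supports loading from a local script file):

```python
from datasets import load_dataset

# Load directly from the script in this repository; the script downloads the
# Zenodo archive referenced in _URL and builds the three splits.
sciqa = load_dataset("./SciQA.py")

for split_name, split in sciqa.items():
    print(split_name, len(split))  # roughly a 70/10/20 train/valid/test split
```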