adding the 4 variants of the dataset
- data_set_split_.json +0 -0
- data_set_split_how.json +0 -0
- data_set_split_what.json +0 -0
- data_set_split_which.json +0 -0
- readme.txt +12 -0
data_set_split_.json
ADDED
The diff for this file is too large to render.
data_set_split_how.json
ADDED
The diff for this file is too large to render.
data_set_split_what.json
ADDED
The diff for this file is too large to render.
data_set_split_which.json
ADDED
The diff for this file is too large to render.
readme.txt
ADDED
@@ -0,0 +1,12 @@
1 + This dataset is part of the bachelor thesis "Evaluating SQuAD-based Question Answering for the Open Research Knowledge Graph Completion".
2 +
3 + This dataset was created for fine-tuning BERT-based models pre-trained on the SQuAD dataset. It was built using a semi-automatic approach
4 + on the ORKG data. The dataset.csv file contains the entire data (all properties) in tabular form and is unsplit. The JSON files contain only the fields
5 + needed for training and evaluation, plus additional fields (the start and end indices of the answers in the abstracts). The data in the JSON files is split into training
6 + and evaluation data. We created four variants of the training and evaluation sets, one for each of the question labels ("no label", "how", "what", "which").
7 +
8 +
9 + For detailed information on each of the fields in the dataset, refer to section 4.2 (Corpus) of the thesis document -- link thesis here --
10 +
11 +
12 + The script used to generate the dataset can be found in the public repository: https://github.com/as18cia/thesis_work
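The readme notes that the JSON splits carry the answer's start and end character indices into the abstract, in the style of SQuAD-format data. A minimal sketch of how such a record could be consumed, assuming SQuAD-like field names (`question`, `context`, `answer_start`, `answer_end` are illustrative; the actual field names are defined in section 4.2 of the thesis):

```python
import json

# Hypothetical record in the SQuAD-like shape the readme describes:
# a question paired with a paper abstract, with the answer span
# marked by start/end character indices (field names are assumed).
record = {
    "question": "Which model was fine-tuned?",
    "context": "We fine-tune a BERT model pre-trained on SQuAD.",
    "answer_start": 15,
    "answer_end": 19,
}

# Serialize and reload, as one would when reading a split file.
record = json.loads(json.dumps(record))

# Recover the answer text by slicing the abstract with the span indices.
answer = record["context"][record["answer_start"]:record["answer_end"]]
print(answer)  # -> BERT
```

Keeping character offsets alongside the text is what lets extractive QA models be trained and scored against the exact answer span rather than a free-form string.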