eduvedras committed
Commit 05f17ec · verified · 1 Parent(s): a1ea87c

Upload 5 files
Desc_Questions.py ADDED
@@ -0,0 +1,102 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """Description and Questions Dataset."""
+
+
+ import datasets
+ import pandas as pd
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ @article{2016arXiv160605250R,
+     author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy},
+     title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
+     journal = {arXiv e-prints},
+     year = 2016,
+     eid = {arXiv:1606.05250},
+     pages = {arXiv:1606.05250},
+     archivePrefix = {arXiv},
+     eprint = {1606.05250},
+ }
+ """
+
+ _DESCRIPTION = """\
+ Image descriptions and questions for data science charts.
+ """
+
+ _URL = "https://huggingface.co/datasets/eduvedras/Desc_Questions/resolve/main/images.tar.gz"
+
+ _METADATA_URLS = {
+     "train": "https://huggingface.co/datasets/eduvedras/Desc_Questions/resolve/main/desc_questions_dataset_train.csv",
+     "test": "https://huggingface.co/datasets/eduvedras/Desc_Questions/resolve/main/desc_questions_dataset_test.csv",
+ }
+
+
+ class Desc_QuestionsTargz(datasets.GeneratorBasedBuilder):
+     """Pairs chart images from a tar.gz archive with per-chart descriptions and questions."""
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "Chart": datasets.Image(),
+                     "Description": datasets.Value("string"),
+                     "Chart_name": datasets.Value("string"),
+                     "Questions": datasets.Value("string"),
+                 }
+             ),
+             # No default supervised_keys: each example pairs an image with
+             # free-form text rather than a single input/target column pair.
+             supervised_keys=None,
+             homepage="https://huggingface.co/datasets/eduvedras/Desc_Questions",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         archive_path = dl_manager.download(_URL)
+         metadata_paths = dl_manager.download(_METADATA_URLS)
+         # Each split gets its own archive iterator; a single shared iterator
+         # would be exhausted after the first split was generated.
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "images": dl_manager.iter_archive(archive_path),
+                     "metadata_path": metadata_paths["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "images": dl_manager.iter_archive(archive_path),
+                     "metadata_path": metadata_paths["test"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, images, metadata_path):
+         metadata = pd.read_csv(metadata_path, sep=";")
+         # The archive iterator is consumed incrementally, so this matching
+         # relies on the images appearing in the tar in the same order as the
+         # metadata rows.
+         idx = 0
+         for _, row in metadata.iterrows():
+             for filepath, image in images:
+                 filename = filepath.split("/")[-1]
+                 if row["Chart"] in filename:
+                     yield idx, {
+                         "Chart": {"path": filename, "bytes": image.read()},
+                         "Description": row["description"],
+                         "Chart_name": row["Chart"],
+                         "Questions": row["Questions"],
+                     }
+                     break
+             idx += 1
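The row-to-image matching in `_generate_examples` depends on the tar's member order tracking the CSV row order: the inner loop consumes the shared archive iterator and `break`s on the first match, so each search resumes where the previous one stopped. A minimal standalone sketch of that logic (hypothetical filenames and in-memory bytes, no `datasets` dependency):

```python
import io

# Hypothetical stand-ins for dl_manager.iter_archive output: (path, file-like) pairs,
# in the same order as the metadata rows below.
archive = iter([
    ("images/Titanic_mv.png", io.BytesIO(b"png-bytes-1")),
    ("images/Titanic_pca.png", io.BytesIO(b"png-bytes-2")),
])
rows = [{"Chart": "Titanic_mv.png"}, {"Chart": "Titanic_pca.png"}]

examples = []
for idx, row in enumerate(rows):
    # The shared iterator resumes where the previous row's search stopped,
    # so each archive member is read at most once.
    for filepath, image in archive:
        filename = filepath.split("/")[-1]
        if row["Chart"] in filename:
            examples.append((idx, filename, image.read()))
            break
```

If a row's image were missing or out of order, the inner loop would silently skip every remaining archive member, which is why the ordering assumption matters.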
desc_questions_dataset.csv ADDED
The diff for this file is too large to render. See raw diff
 
desc_questions_dataset_test.csv ADDED
@@ -0,0 +1,16 @@
+ Chart;description;Questions
+ Titanic_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Pclass <= 2.5 and the second with the condition Parch <= 0.5.;['It is clear that variable Pclass is one of the five most relevant features.', 'The variable Pclass seems to be one of the four most relevant features.', 'The variable Age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Age is the first most discriminative variable regarding the class.', 'Variable Pclass is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Parch and SibSp seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 0.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, B) as 0 for any k ≤ 72.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, not B) as 0 for any k ≤ 181.']
+ Titanic_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.']
+ Titanic_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
+ Titanic_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.']
+ Titanic_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.']
+ Titanic_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.']
+ Titanic_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
+ Titanic_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 10 and 20%.']
+ Titanic_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Fare or Pclass can be discarded without losing information.', 'The variable Pclass can be discarded without risking losing information.', 'Variables Age and Parch are redundant, but we can’t say the same for the pair Fare and Pclass.', 'Variables SibSp and Fare are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Parch and Fare seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Parch might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Parch previously than variable Age.']
+ Titanic_boxplots.png;A set of boxplots of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['Variable Fare is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable Pclass.', 'Outliers seem to be a problem in the dataset.', 'Variable Parch shows some outlier values.', 'Variable Parch doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
+ Titanic_histograms_symbolic.png;A set of bar charts of the variables ['Embarked', 'Sex'].;['All variables, but the class, should be dealt with as date.', 'The variable Embarked can be seen as ordinal.', 'The variable Embarked can be seen as ordinal without losing information.', 'Considering the common semantics for Sex and Embarked variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Embarked variable, dummification would be the most adequate encoding.', 'The variable Embarked can be coded as ordinal without losing information.', 'Feature generation based on variable Sex seems to be promising.', 'Feature generation based on the use of variable Sex wouldn’t be useful, but the use of Embarked seems to be promising.', 'Given the usual semantics of Embarked variable, dummification would have been a better codification.', 'It is better to drop the variable Embarked than removing all records with missing values.', 'Not knowing the semantics of Sex variable, dummification could have been a more adequate codification.']
+ Titanic_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Age', 'Embarked'].;['Discarding variable Age would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Embarked seems to be promising.', 'It is better to drop the variable Embarked than removing all records with missing values.']
+ Titanic_class_histogram.png;A bar chart showing the distribution of the target variable Survived.;['Balancing this dataset would be mandatory to improve the results.']
+ Titanic_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
+ Titanic_histograms_numeric.png;A set of histograms of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['All variables, but the class, should be dealt with as date.', 'The variable Age can be seen as ordinal.', 'The variable Fare can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable Parch shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Parch shows some outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Fare and Pclass variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Fare variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable SibSp seems to be promising.', 'Feature generation based on the use of variable Fare wouldn’t be useful, but the use of Pclass seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better codification.', 'It is better to drop the variable SibSp than removing all records with missing values.', 'Not knowing the semantics of Parch variable, dummification could have been a more adequate codification.']
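Each `Questions` cell in the CSV stores a Python list literal as a single semicolon-delimited string field, so consumers must parse it back into a list; `ast.literal_eval` handles this safely (the cell below is a shortened sample from the test split above):

```python
import ast

# One "Questions" cell as it appears in the CSV: a Python list literal stored as text.
cell = "['Balancing this dataset would be mandatory to improve the results.']"

# literal_eval parses only Python literals, so untrusted CSV text cannot execute code.
questions = ast.literal_eval(cell)
```

Note that the loader itself leaves the column as a raw string (`datasets.Value("string")`), so this parsing step is left to downstream code.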
desc_questions_dataset_train.csv ADDED
The diff for this file is too large to render. See raw diff
 
images.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63333da4d6031ce431073278efc762bac037a178b8ca66c745cb171dd5e2fb35
+ size 17305159