system (HF staff) committed
Commit 81350ed
0 parents (root commit)

Update files from the datasets library (from 1.18.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +158 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.0.0/dummy_data.zip +3 -0
  5. text2log.py +83 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,158 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - machine-generated
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ pretty_name: 'text2log'
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - machine-translation
+ ---
+
+ # Dataset Card for text2log
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** [GitHub](https://github.com/alevkov/text2log)
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** https://github.com/alevkov
+
+ ### Dataset Summary
+
+ The dataset contains about 100,000 simple English sentences selected and filtered from `enTenTen15`, together with their translations into First Order Logic (FOL) produced with `ccg2lambda`.
+
+ ### Supported Tasks and Leaderboards
+
+ `semantic-parsing`: the dataset is used to train models that generate FOL statements from natural language text.
+
+ ### Languages
+
+ en-US
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {
+   'clean': 'All things that are new are good.',
+   'trans': 'all x1.(_thing(x1) -> (_new(x1) -> _good(x1)))'
+ }
+ ```
+
+ ### Data Fields
+
+ - 'clean': a simple English sentence (exposed as `sentence` by the loading script)
+ - 'trans': the corresponding translation into Lambda Dependency-based Compositional Semantics (exposed as `fol_translation` by the loading script)
+
+ ### Data Splits
+
+ No predefined train/test split is given. The authors used an 80/20 split.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The text2log dataset is used to improve FOL statement generation from natural language text.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ Short text samples selected from the enTenTen15 corpus.
+
+ #### Who are the source language producers?
+
+ See https://www.sketchengine.eu/ententen-english-corpus/
+
+ ### Annotations
+
+ #### Annotation process
+
+ Machine-generated using https://github.com/mynlp/ccg2lambda
+
+ #### Who are the annotators?
+
+ None (the translations are machine-generated).
+
+ ### Personal and Sensitive Information
+
+ The dataset does not contain personal or sensitive information.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ None given.
+
+ ### Citation Information
+ ```bibtex
+ @INPROCEEDINGS{9401852,
+   author={Levkovskyi, Oleksii and Li, Wei},
+   booktitle={SoutheastCon 2021},
+   title={Generating Predicate Logic Expressions from Natural Language},
+   year={2021},
+   volume={},
+   number={},
+   pages={1-8},
+   doi={10.1109/SoutheastCon45413.2021.9401852}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@apergo-ai](https://github.com/apergo-ai) for adding this dataset.
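
Below the card, for reference, is a minimal sketch of loading the dataset with the `datasets` library. It assumes the dataset is published on the Hub under the name `text2log`; the feature names (`sentence`, `fol_translation`) follow the loading script and `dataset_infos.json` added in this commit, while the card's example shows the raw column names (`clean`, `trans`).

```python
# Minimal sketch: load text2log via the Hugging Face datasets library.
# Assumes the dataset is available on the Hub as "text2log"; the feature
# names follow the loading script in this commit, not the card's example.
from datasets import load_dataset

ds = load_dataset("text2log", split="train")  # only a train split is defined

print(ds.features)                  # 'sentence' and 'fol_translation', both strings
print(ds[0]["sentence"])            # a simple English sentence
print(ds[0]["fol_translation"])     # its First Order Logic translation

# The card notes the authors used an 80/20 split; one way to reproduce a
# comparable split locally:
splits = ds.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```
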
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "The dataset contains about 100,000 simple English sentences selected and filtered from enTenTen15 and their translation into First Order Logic (FOL) Lambda Dependency-based Compositional Semantics using ccg2lambda.\n", "citation": "@INPROCEEDINGS{9401852, author={Levkovskyi, Oleksii and Li, Wei}, booktitle={SoutheastCon 2021}, title={Generating Predicate Logic Expressions from Natural Language}, year={2021}, volume={}, number={}, pages={1-8}, doi={10.1109/SoutheastCon45413.2021.9401852}}\n", "homepage": "https://github.com/alevkov/text2log", "license": "none provided", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "fol_translation": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "text2log", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10358134, "num_examples": 101931, "dataset_name": "text2log"}}, "download_checksums": {"https://raw.githubusercontent.com/apergo-ai/text2log/main/dat/text2log_clean.csv": {"num_bytes": 9746473, "checksum": "1cdfcd5ece1e95837880d552d910d132620b0b41afd25c0bf4d0a35966fb8fd8"}}, "download_size": 9746473, "post_processing_size": null, "dataset_size": 10358134, "size_in_bytes": 20104607}}
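
The `download_checksums` entry above records the SHA-256 hash and size of the source CSV. A minimal, standard-library sketch of verifying a downloaded copy against those recorded values (the URL and expected hash are copied verbatim from the JSON above; the local filename is arbitrary):

```python
# Sketch: verify the source CSV against the checksum recorded in
# dataset_infos.json. URL and expected hash are copied from the JSON above.
import hashlib
import urllib.request

URL = "https://raw.githubusercontent.com/apergo-ai/text2log/main/dat/text2log_clean.csv"
EXPECTED_SHA256 = "1cdfcd5ece1e95837880d552d910d132620b0b41afd25c0bf4d0a35966fb8fd8"

path, _ = urllib.request.urlretrieve(URL, "text2log_clean.csv")  # local name is arbitrary

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert digest.hexdigest() == EXPECTED_SHA256, "checksum mismatch"
print("checksum OK")
```
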
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46129ad203decbf13151c8bc2fb87797e9a5b733064714a6b34b29c3ed1909d3
+ size 408
text2log.py ADDED
@@ -0,0 +1,83 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """The text2log dataset"""
+
+
+ import csv
+
+ import datasets
+
+
+ _CITATION = """\
+ @INPROCEEDINGS{9401852, author={Levkovskyi, Oleksii and Li, Wei}, booktitle={SoutheastCon 2021}, title={Generating Predicate Logic Expressions from Natural Language}, year={2021}, volume={}, number={}, pages={1-8}, doi={10.1109/SoutheastCon45413.2021.9401852}}
+ """
+
+ _DESCRIPTION = """\
+ The dataset contains about 100,000 simple English sentences selected and filtered from enTenTen15 and their translation into First Order Logic (FOL) Lambda Dependency-based Compositional Semantics using ccg2lambda.
+ """
+
+ _HOMEPAGE = "https://github.com/alevkov/text2log"
+
+ _LICENSE = "none provided"
+
+
+ _URLS = {
+     "csv": "https://raw.githubusercontent.com/apergo-ai/text2log/main/dat/text2log_clean.csv",
+     "zip": "https://raw.githubusercontent.com/apergo-ai/text2log/main/dat/text2log_clean.zip",
+ }
+
+
+ class Text2log(datasets.GeneratorBasedBuilder):
+     """Simple English sentences and FOL representations using LDbCS"""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+
+         features = datasets.Features(
+             {
+                 "sentence": datasets.Value("string"),
+                 "fol_translation": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             supervised_keys=None,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         csv_path = dl_manager.download_and_extract(_URLS["csv"])
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": csv_path}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Generate text2log dataset examples."""
+         with open(filepath, encoding="utf-8") as csv_file:
+             csv_reader = csv.reader(
+                 csv_file, quotechar='"', delimiter=";", quoting=csv.QUOTE_ALL, skipinitialspace=True
+             )
+             next(csv_reader)  # skip the CSV header row
+             for id_, row in enumerate(csv_reader):
+                 yield id_, {
+                     "sentence": str(row[0]),
+                     "fol_translation": str(row[1]),
+                 }
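
For clarity, here is a standalone sketch of what `_generate_examples` does with the raw CSV: the file is semicolon-delimited with fully quoted fields, the first row is a header that gets skipped, and each remaining row holds the English sentence and its FOL translation. The local path used here is hypothetical.

```python
# Standalone sketch mirroring _generate_examples above: read the raw
# text2log CSV (semicolon-delimited, fully quoted, header row skipped)
# and yield (id, example) pairs. "text2log_clean.csv" is a hypothetical
# local path to the downloaded file.
import csv


def read_text2log(path):
    with open(path, encoding="utf-8") as csv_file:
        reader = csv.reader(
            csv_file, quotechar='"', delimiter=";", quoting=csv.QUOTE_ALL, skipinitialspace=True
        )
        next(reader)  # skip the header row, as the loading script does
        for idx, row in enumerate(reader):
            yield idx, {"sentence": row[0], "fol_translation": row[1]}


for idx, example in read_text2log("text2log_clean.csv"):
    print(idx, example["sentence"], "->", example["fol_translation"])
    if idx >= 2:  # only show the first few rows
        break
```
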