gsarti committed
Commit
5e078e2
1 Parent(s): 586da28

Initial commit

Files changed (3)
  1. README.md +129 -0
  2. ik_nlp_22_slp.py +97 -0
  3. slp3ed.tsv +0 -0
README.md ADDED
@@ -0,0 +1,129 @@
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: slp3ed-iknlp2022
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
- text-retrieval
- summarization
- question-generation
---

# Dataset Card for IK-NLP-22 Speech and Language Processing

## Table of Contents

- [Dataset Card for IK-NLP-22 Speech and Language Processing](#dataset-card-for-ik-nlp-22-speech-and-language-processing)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Projects](#projects)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
      - [Paragraphs Configuration](#paragraphs-configuration)
      - [Questions Configuration](#questions-configuration)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)

## Dataset Description

- **Source:** [Stanford](https://web.stanford.edu/~jurafsky/slp3/)
- **Point of Contact:** [Gabriele Sarti](mailto:g.sarti@rug.nl)

### Dataset Summary

This dataset contains chapters extracted via a semi-automatic procedure from the Speech and Language Processing book (3rd edition draft) by Jurafsky and Martin (see below for additional details). A small set of conceptual questions associated with each chapter is also provided, alongside possible answers.

Only the contents of chapters 2 to 11 of the book draft are provided, since these are the ones relevant to the 2022 edition of the Natural Language Processing course of the Information Science Master's Degree (IK) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) with the assistance of [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti).

*The Speech and Language Processing book was made freely available by the authors [Dan Jurafsky](http://web.stanford.edu/people/jurafsky/) and [James H. Martin](http://www.cs.colorado.edu/~martin/) on the [Stanford University website](https://web.stanford.edu/~jurafsky/slp3/). The present dataset was created for educational purposes, and is based on the draft of the 3rd edition of the book accessed on December 29th, 2021. All rights of the present contents are attributed to the original authors.*

### Projects

To be provided.

### Languages

The language data of Speech and Language Processing is in English (BCP-47 `en`).

## Dataset Structure

### Data Instances

The dataset contains two configurations: `paragraphs` (default) and `questions`.

#### Paragraphs Configuration

The `paragraphs` configuration contains all the paragraphs of the selected book chapters, each associated with the respective chapter, section and subsection. An example from the `train` split of the `paragraphs` config is provided below. The example belongs to section 2.3 but not to a subsection, so the `n_subsection` and `subsection` fields are empty strings.

```json
{
    "n_chapter": "2",
    "chapter": "Regular Expressions",
    "n_section": "2.3",
    "section": "Corpora",
    "n_subsection": "",
    "subsection": "",
    "text": "It's also quite common for speakers or writers to use multiple languages in a single communicative act, a phenomenon called code switching. Code switching (2.2) Por primera vez veo a @username actually being hateful! it was beautiful:)"
}
```

The text is provided as-is, without further preprocessing or tokenization.

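As a minimal usage sketch (the `GroNLP/ik-nlp-22_slp` identifier below is inferred from the data URL used by the loading script in this repository), the paragraphs can be loaded with the 🤗 Datasets library as follows:

```python
from datasets import load_dataset

# Load the default `paragraphs` configuration; only a train split is available.
paragraphs = load_dataset("GroNLP/ik-nlp-22_slp", "paragraphs", split="train")

# Each example exposes the string fields shown in the instance above.
example = paragraphs[0]
print(example["n_chapter"], example["chapter"], example["n_section"], example["section"])
```
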
#### Questions Configuration

To be completed.

### Data Splits

| config       | train | test |
|-------------:|------:|-----:|
| `paragraphs` |  1722 |    - |
| `questions`  |   TBD |  TBD |

### Dataset Creation

The contents of the Speech and Language Processing book PDF were extracted using the [PDF to S2ORC JSON Converter](https://github.com/allenai/s2orc-doc2json) by AllenAI. The texts extracted by the converter were then manually cleaned to remove end-of-chapter exercises and other irrelevant content (e.g. tables, TikZ figures). Some issues in the parsed content were deliberately preserved in the final version to maintain a naturalistic setting for the associated projects and to encourage students to apply data-filtering heuristics.
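
As a purely illustrative example (not part of the dataset creation pipeline), one such heuristic could discard very short paragraphs, which are likely to be parsing residue; the `min_chars` threshold below is an arbitrary assumption:

```python
from datasets import load_dataset

def keep_paragraph(example, min_chars=80):
    # Very short paragraphs often correspond to parsing residue
    # (stray captions, equation fragments, etc.).
    return len(example["text"].strip()) >= min_chars

paragraphs = load_dataset("GroNLP/ik-nlp-22_slp", "paragraphs", split="train")
filtered = paragraphs.filter(keep_paragraph)
print(f"Kept {len(filtered)} of {len(paragraphs)} paragraphs")
```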

## Additional Information

### Dataset Curators

For problems with this 🤗 Datasets version, please contact us at [ik-nlp-course@rug.nl](mailto:ik-nlp-course@rug.nl).

### Licensing Information

Please refer to the authors' websites for licensing information.

### Citation Information

Please cite the authors if you use these corpora in your work:

```bibtex
@book{slp3ed-iknlp2022,
    author = {Jurafsky, Daniel and Martin, James H.},
    year = {2021},
    month = {12},
    pages = {1--235, 1--19},
    title = {Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition},
    volume = {3}
}
```
ik_nlp_22_slp.py ADDED
@@ -0,0 +1,97 @@
import csv
import sys
from typing import List

import datasets

# Raise the csv module's field size limit to accommodate very long text fields.
csv.field_size_limit(sys.maxsize)


_CITATION = """\
@book{slp3ed-iknlp2022,
    author = {Jurafsky, Daniel and Martin, James H.},
    year = {2021},
    month = {12},
    pages = {1--235, 1--19},
    title = {Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition},
    volume = {3}
}
"""

_DESCRIPTION = """\
Paragraphs from the Speech and Language Processing book (3ed) by Jurafsky and Martin extracted semi-automatically
from Chapters 2 to 11 of the original book draft.
"""

_HOMEPAGE = "https://www.rug.nl/masters/information-science/?lang=en"

_LICENSE = "See https://web.stanford.edu/~jurafsky/slp3/"

class IkNlp22SlpConfig(datasets.BuilderConfig):
    """BuilderConfig for the IK-NLP-22 Speech and Language Processing dataset."""

    def __init__(
        self,
        features,
        data_url,
        **kwargs,
    ):
        """
        Args:
            features: `list[string]`, list of the features that will appear in the
                feature dict.
            data_url: `string`, URL to download the data file from.
            **kwargs: keyword arguments forwarded to super.
        """
        super().__init__(version=datasets.Version("1.0.0"), **kwargs)
        self.data_url = data_url
        self.features = features


class IkNlp22Slp(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        IkNlp22SlpConfig(
            name="paragraphs",
            features=["n_chapter", "chapter", "n_section", "section", "n_subsection", "subsection", "text"],
            data_url="https://huggingface.co/datasets/GroNLP/ik-nlp-22_slp/resolve/main/slp3ed.tsv",
        ),
    ]

    DEFAULT_CONFIG_NAME = "paragraphs"

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features({feature: datasets.Value("string") for feature in self.config.features}),
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        data_file = dl_manager.download_and_extract(self.config.data_url)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": data_file,
                    "split": "train",
                    "features": self.config.features,
                },
            ),
        ]

    def _generate_examples(self, filepath: str, split: str, features: List[str]):
        """Yields examples as (key, example) tuples."""
        with open(filepath, encoding="utf8") as f:
            for id_, row in enumerate(f):
                # Skip the header row of the TSV file.
                if id_ == 0:
                    continue
                # Fields are tab-separated and follow the order of `features`.
                fields = row.strip().split("\t")
                yield id_, {k: v.strip() for k, v in zip(features, fields)}
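

# Usage sketch: with a 🤗 Datasets version that supports script-based loaders,
# the builder above can be exercised directly from a local copy of this file.
if __name__ == "__main__":
    from datasets import load_dataset

    # Build the default `paragraphs` configuration and print its train split.
    dataset = load_dataset(__file__, "paragraphs", split="train")
    print(dataset)
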
slp3ed.tsv ADDED
The diff for this file is too large to render. See raw diff