odehaene committed
Commit 2dc6bb9
1 Parent(s): c264259

feat: Add dataset

Files changed (4)
  1. README.md +112 -0
  2. dataset.jsonl +3 -0
  3. requirements.txt +4 -0
  4. scrapper.py +204 -0
README.md ADDED
@@ -0,0 +1,112 @@
---
annotations_creators: []
language_creators:
- other
languages:
- en
licenses:
- cc-by-sa-3.0
- cc-by-nc-2.5
multilinguality:
- monolingual
pretty_name: XKCD
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- image-to-text
- feature-extraction
task_ids: []
---

# Dataset Card for "XKCD"

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://xkcd.com/](https://xkcd.com/), [https://www.explainxkcd.com](https://www.explainxkcd.com)
- **Repository:** [Hugging Face repository](https://huggingface.co/datasets/olivierdehaene/xkcd/tree/main)

### Dataset Summary

XKCD is an export of all XKCD comics, with their transcripts and explanations scraped from
[https://explainxkcd.com](https://explainxkcd.com).

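For a quick look at the data, here is a minimal loading sketch using the `datasets` library (this assumes the Hub's automatic JSON Lines loader picks up `dataset.jsonl`; `train` is the default split name that loader assigns):

```python
from datasets import load_dataset

# Load the JSONL export directly from the Hub repository
dataset = load_dataset("olivierdehaene/xkcd", split="train")

# Each record holds one comic's metadata, transcript and explanation
print(dataset[0]["title"], dataset[0]["url"])
```
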
## Dataset Structure

### Data Instances

- `id`: `1`
- `title`: `Barrel - Part 1`
- `image_title`: `Barrel - Part 1`
- `url`: `https://www.xkcd.com/1`
- `image_url`: `https://imgs.xkcd.com/comics/barrel_cropped_(1).jpg`
- `explained_url`: `https://www.explainxkcd.com/wiki/index.php/1:_Barrel_-_Part_1`
- `transcript`: `[A boy sits in a barrel which is floating in an ocean.] Boy: i wonder where i'll float next?
  [A smaller frame with a zoom out of the boy in the barrel seen from afar. The barrel drifts into the distance. Nothing
  else can be seen.]`
- `explanation`: `The comic shows a young boy floating in a barrel in an ocean that doesn't have a visible end. It
  comments on the unlikely optimism and perhaps naïveté people sometimes display. The boy is completely lost and seems
  hopelessly alone, without any plan or control of the situation. Yet, rather than afraid or worried, he is instead
  quietly curious: "I wonder where I'll float next?" Although not necessarily the situation in this comic, this is a
  behavior people often exhibit when there is nothing they can do about a problematic situation for a long time; they may
  have given up hope or developed a cavalier attitude as a coping mechanism. The title text expands on the philosophical
  content, with the boy representing the average human being: wandering through life with no real plan, quietly
  optimistic, always opportunistic and clueless as to what the future may hold. The isolation of the boy may also
  represent the way in which we often feel lost through life, never knowing quite where we are, believing that there is
  no one to whom to turn. This comic could also reflect on Randall's feelings towards creating xkcd in the first place;
  unsure of what direction the web comic would turn towards, but hopeful that it would eventually become the popular web
  comic that we know today. This is the first in a six-part series of comics whose parts were randomly published during
  the first several dozen strips. The series features a character that is not consistent with what would quickly become
  the xkcd stick figure style. The character is in a barrel. In 1110: Click and Drag there is a reference to this comic
  at 1 North, 48 East . After Randall released the full The Boy and his Barrel story on xkcd, it has been clear that the
  original Ferret story should also be included as part of the barrel series. The full series can be found here . They
  are listed below in the order Randall chose for the short story above: `

### Data Fields

- `id`
- `title`
- `image_title`: title (hover) text of the comic image
- `url`: xkcd.com URL
- `image_url`
- `explained_url`: explainxkcd.com URL
- `transcript`: English transcript of the comic
- `explanation`: English explanation of the comic

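Since `dataset.jsonl` is stored with Git LFS, the raw file can also be read locally once pulled; a sketch using pandas, with the field list above as the expected schema:

```python
import pandas as pd

# Read the line-delimited JSON export produced by scrapper.py
df = pd.read_json("dataset.jsonl", lines=True)

# Inspect the schema and one record
print(df.columns.tolist())
print(df.loc[0, ["id", "title", "explained_url"]])
```
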
## Dataset Creation

The dataset was scraped from both explainxkcd.com and xkcd.com.
The `transcript` and `explanation` fields are therefore licensed under the
Creative Commons Attribution-ShareAlike 3.0 license, while the images are licensed under the
Creative Commons Attribution-NonCommercial 2.5 license.

See the [Copyrights](https://www.explainxkcd.com/wiki/index.php/explain_xkcd:Copyrights) page from
explainxkcd.com for more details.

## Considerations for Using the Data

Because the data was scraped, some fields may be missing part of the original content.

## Additional Information

### Licensing Information

The dataset is licensed under the Creative Commons Attribution-ShareAlike 3.0 license for
the `transcript` and `explanation` fields, while the images are licensed under the
Creative Commons Attribution-NonCommercial 2.5 license.

### Contributions

Thanks to [@OlivierDehaene](https://github.com/OlivierDehaene) for adding this dataset.
dataset.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f864e9d3d5e1e49531994b22e7e963bd54958c2789aa76e273d7e312b90c4b4
size 12874573
requirements.txt ADDED
@@ -0,0 +1,4 @@
aiolimiter==1.0.0
aiohttp==3.8.1
beautifulsoup4==4.11.1
pandas==1.4.2
scrapper.py ADDED
@@ -0,0 +1,204 @@
import aiohttp
import asyncio
import re

import pandas as pd

from pathlib import Path
from aiolimiter import AsyncLimiter
from typing import Dict, List
from bs4 import BeautifulSoup
from bs4.element import Tag

LIST_COMICS_500_URL = (
    "https://www.explainxkcd.com/wiki/index.php/List_of_all_comics_(1-500)"
)
LIST_COMICS_FULL_URL = (
    "https://www.explainxkcd.com/wiki/index.php/List_of_all_comics_(full)"
)


def walk_tag(initial_tag: Tag, end_tag_name: str) -> str:
    """
    Walk the HTML tree and aggregate all text between an
    initial tag and an end tag.

    Parameters
    ----------
    initial_tag: Tag
    end_tag_name: str

    Returns
    -------
    aggregated_text: str
    """
    result = []
    current_tag = initial_tag

    # Walk the sibling tags, collecting paragraph and definition-list text
    while current_tag is not None:
        if current_tag.name in ["p", "dl"]:
            result.append(current_tag.get_text(separator=" ", strip=True))
        elif current_tag.name == end_tag_name:
            # We reached the end tag, stop
            break
        current_tag = current_tag.next_sibling
    return "\n".join(result)


async def parse_url_html(
    session: aiohttp.ClientSession, url: str, throttler: AsyncLimiter
) -> BeautifulSoup:
    """
    Parse the HTML content of a given URL.
    The request is sent asynchronously, using the provided request throttler.
    If the request fails, we retry up to 5 times.

    Parameters
    ----------
    session: aiohttp.ClientSession
    url: str
    throttler: AsyncLimiter

    Returns
    -------
    BeautifulSoup
    """
    for _ in range(5):
        try:
            # Throttle to prevent issues with server-side rate limiting
            async with throttler:
                async with session.get(url, raise_for_status=True) as resp:
                    html = await resp.text()
                    return BeautifulSoup(html, "html.parser")
        # Request failed: retry
        except aiohttp.ClientError:
            continue
    # All retries failed
    return None


async def scrap_comic(
    session: aiohttp.ClientSession, explained_url: str, throttler: AsyncLimiter
) -> Dict[str, str]:
    """
    Try to scrape all information for a given XKCD comic using its
    `explainxkcd.com` URL.

    Parameters
    ----------
    session: aiohttp.ClientSession
    explained_url: str
    throttler: AsyncLimiter

    Returns
    -------
    Dict[str, str]
    """
    soup = await parse_url_html(session, explained_url, throttler)

    # Parse id and title from the page heading, e.g. "1: Barrel - Part 1"
    title_splits = soup.find("h1").text.split(":")
    if len(title_splits) > 1:
        comic_id = title_splits[0]
        title = "".join(title_splits[1:]).strip()
    else:
        comic_id = None
        title = "".join(title_splits).strip()

    # Parse explanation
    explanation_soup = soup.find("span", {"id": "Explanation"})
    try:
        explanation = walk_tag(explanation_soup.parent, "span")
    except AttributeError:
        explanation = None

    # Parse transcript
    transcript_soup = soup.find("span", {"id": "Transcript"})
    try:
        transcript = walk_tag(transcript_soup.parent, "span")
    except AttributeError:
        transcript = None

    xkcd_url = f"https://www.xkcd.com/{comic_id}"
    xkcd_soup = await parse_url_html(session, xkcd_url, throttler)

    # Parse image title: prefer the hover text, fall back to the alt text
    try:
        image = xkcd_soup.find("div", {"id": "comic"}).img
        if image.has_attr("title"):
            image_title = image["title"]
        else:
            image_title = image["alt"]
    except (AttributeError, KeyError):
        image_title = None

    # Parse image url
    try:
        image_url = xkcd_soup.find(text=re.compile("https://imgs.xkcd.com"))
    except AttributeError:
        image_url = None

    return dict(
        id=comic_id,
        title=title,
        image_title=image_title,
        url=xkcd_url,
        image_url=image_url,
        explained_url=explained_url,
        transcript=transcript,
        explanation=explanation,
    )


async def scap_comic_urls(
    session: aiohttp.ClientSession, comic_list_url: str
) -> List[str]:
    """
    Scrape all XKCD comic URLs from the `explainxkcd.com` website.

    Parameters
    ----------
    session: aiohttp.ClientSession
    comic_list_url: str

    Returns
    -------
    urls: List[str]
    """
    async with session.get(comic_list_url) as resp:
        html = await resp.text()
        soup = BeautifulSoup(html, "html.parser")

    # Hack to easily find comics: each entry carries a "create" span
    # next to its wiki link
    create_spans = soup.find_all("span", {"class": "create"})
    return [
        "https://www.explainxkcd.com" + span.parent.a["href"] for span in create_spans
    ]


async def main():
    """
    Scrape the XKCD dataset.
    """
    # Throttle to 10 requests per second
    throttler = AsyncLimiter(max_rate=10, time_period=1)
    async with aiohttp.ClientSession() as session:
        # Get all comic urls
        comic_urls = await scap_comic_urls(
            session, LIST_COMICS_500_URL
        ) + await scap_comic_urls(session, LIST_COMICS_FULL_URL)

        # Scrape all comics concurrently
        data = await asyncio.gather(
            *[scrap_comic(session, url, throttler) for url in comic_urls]
        )

        # Drop failed scrapes, sort by comic id and export as JSON Lines
        df = (
            pd.DataFrame.from_records(data)
            .dropna(subset=["id"])
            .astype({"id": "int32"})
            .sort_values("id")
        )
        df.to_json(Path(__file__).parent / "dataset.jsonl", orient="records", lines=True)


if __name__ == "__main__":
    asyncio.run(main())
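As a usage note, the export can be regenerated by running the script (e.g. `python scrapper.py`) after installing the pinned requirements; an equivalent minimal sketch, assuming `scrapper.py` is importable from the working directory:

```python
import asyncio

# Import the scraper module shipped in this repository
from scrapper import main

# Scrape explainxkcd.com and xkcd.com, then write dataset.jsonl
asyncio.run(main())
```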