TheGreatRambler committed on
Commit
0346107
1 Parent(s): fa3ef3c

Create README

Files changed (2)
  1. README.md +206 -0
  2. __init__.py +1 -0
README.md ADDED
@@ -0,0 +1,206 @@
---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- text-generation
- structure-prediction
- object-detection
- text-mining
- information-retrieval
- other
task_ids:
- other
pretty_name: Mario Maker 2 level comments
---

# Mario Maker 2 level comments
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description
The Mario Maker 2 level comment dataset consists of 31.9 million level comments from Nintendo's online service, totaling around 23 GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of one month in February 2022.

### How to use it
The Mario Maker 2 level comment dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_comments", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
    'data_id': 3000006,
    'comment_id': '20200430072710528979_302de3722145c7a2_2dc6c6',
    'type': 2,
    'pid': '3471680967096518562',
    'posted': 1561652887,
    'clear_required': 0,
    'text': '',
    'reaction_image_id': 10,
    'custom_image': [some binary data],
    'has_beaten': 0,
    'x': 557,
    'y': 64,
    'reaction_face': 0,
    'unk8': 0,
    'unk10': 0,
    'unk12': 0,
    'unk14': [some binary data],
    'unk17': 0
}
```
Comments can be one of three types: text, reaction image, or custom image. `type` can be used with the `CommentType` enum below to identify the different kinds of comments. Custom images are binary PNGs.
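
For example, here is a minimal sketch of pulling custom image comments out of the stream and saving them to disk. It assumes `custom_image` is returned as raw PNG `bytes` (as described in the field table below); adjust if your `datasets` version decodes the field differently.

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_comments", streaming=True, split="train")

# type 0 is "Custom Image" (see the enums section below)
saved = 0
for comment in ds:
    if comment["type"] == 0 and comment["custom_image"]:
        # The custom drawing is assumed to already be PNG-encoded bytes
        with open(f"comment_{comment['comment_id']}.png", "wb") as f:
            f.write(comment["custom_image"])
        saved += 1
        if saved == 5:
            break
```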

You can also download the full dataset. Note that this will download ~23 GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level_comments", split="train")
```

## Data Structure

### Data Instances

```python
{
    'data_id': 3000006,
    'comment_id': '20200430072710528979_302de3722145c7a2_2dc6c6',
    'type': 2,
    'pid': '3471680967096518562',
    'posted': 1561652887,
    'clear_required': 0,
    'text': '',
    'reaction_image_id': 10,
    'custom_image': [some binary data],
    'has_beaten': 0,
    'x': 557,
    'y': 64,
    'reaction_face': 0,
    'unk8': 0,
    'unk10': 0,
    'unk12': 0,
    'unk14': [some binary data],
    'unk17': 0
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this comment appears on|
|comment_id|string|Comment ID|
|type|int|Type of comment, enum below|
|pid|string|Player ID of the comment creator|
|posted|int|UTC timestamp of when this comment was created|
|clear_required|int|Whether this comment requires a clear to view|
|text|string|If the comment type is text, the text of the comment|
|reaction_image_id|int|If this comment is a reaction image, the ID of the reaction image, enum below|
|custom_image|bytes|If this comment is a custom drawing, the custom drawing as a PNG binary|
|has_beaten|int|Whether the user had beaten the level when they created the comment|
|x|int|The X position of the comment in-game|
|y|int|The Y position of the comment in-game|
|reaction_face|int|The reaction face of this user's Mii, enum below|
|unk8|int|Unknown|
|unk10|int|Unknown|
|unk12|int|Unknown|
|unk14|bytes|Unknown|
|unk17|int|Unknown|

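As a small example of interpreting a single record, the sketch below turns `posted` into a readable UTC datetime (assuming it is a Unix epoch in seconds, as the example value above suggests) and reads the flag and position fields:

```python
from datetime import datetime, timezone
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_comments", streaming=True, split="train")
comment = next(iter(ds))

# `posted` is assumed to be a Unix timestamp in seconds
when = datetime.fromtimestamp(comment["posted"], tz=timezone.utc)
print(f"Comment {comment['comment_id']} on level {comment['data_id']}")
print(f"Posted {when:%Y-%m-%d %H:%M:%S} UTC at in-game position ({comment['x']}, {comment['y']})")
print(f"Clear required to view: {bool(comment['clear_required'])}, commenter beat the level: {bool(comment['has_beaten'])}")
```
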
### Data Splits

The dataset only contains a train split.

## Enums

The dataset contains some enum integer fields. These mappings can be used to convert them back to their string equivalents:

```python
CommentType = {
    0: "Custom Image",
    1: "Text",
    2: "Reaction Image"
}

CommentReactionImage = {
    0: "Nice!",
    1: "Good stuff!",
    2: "So tough...",
    3: "EASY",
    4: "Seriously?!",
    5: "Wow!",
    6: "Cool idea!",
    7: "SPEEDRUN!",
    8: "How?!",
    9: "Be careful!",
    10: "So close!",
    11: "Beat it!"
}

CommentReactionFace = {
    0: "Normal",
    16: "Wink",
    1: "Happy",
    4: "Surprised",
    18: "Scared",
    3: "Confused"
}
```
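
With those dictionaries in scope, here is a quick sketch of decoding a streamed comment into human-readable form; the reaction lookups fall back to "Unknown", since only the values documented above are covered:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_comments", streaming=True, split="train")

for comment in ds:
    kind = CommentType[comment["type"]]
    if kind == "Text":
        print(f"Text comment: {comment['text']!r}")
    elif kind == "Reaction Image":
        print(f"Reaction image: {CommentReactionImage.get(comment['reaction_image_id'], 'Unknown')}")
    else:
        print(f"Custom image comment: {len(comment['custom_image'])} bytes of PNG data")
    print(f"Mii reaction face: {CommentReactionFace.get(comment['reaction_face'], 'Unknown')}")
    break  # inspect just the first comment
```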

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset consists of comments from many different Mario Maker 2 players around the world, and as such the text could contain harmful language. Harmful depictions could also be present in the custom images.
__init__.py ADDED
@@ -0,0 +1 @@
# To allow relative imports in Python