---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-generation
- structure-prediction
- object-detection
- text-mining
- information-retrieval
- other
task_ids:
- other
pretty_name: Mario Maker 2 ninjis
---

# Mario Maker 2 ninjis
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description
The Mario Maker 2 ninjis dataset consists of 3 million ninji replays from Nintendo's online service, totaling around 12.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it
The Mario Maker 2 ninjis dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
    'data_id': 12171034,
    'pid': '4748613890518923485',
    'time': 83388,
    'replay': [some binary data]
}
```
Each row is a ninji run in the level denoted by `data_id`, performed by the player denoted by `pid`. The length of the run is `time` in milliseconds.

`replay` is a zlib-compressed binary format describing the animation frames and coordinates of the player throughout the run. The replay can be parsed as follows:

```python
from datasets import load_dataset
import zlib
import struct

ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")
row = next(iter(ds))
replay = zlib.decompress(row["replay"])

frames = struct.unpack(">I", replay[0x10:0x14])[0]
character = replay[0x14]

character_mapping = {
    0: "Mario",
    1: "Luigi",
    2: "Toad",
    3: "Toadette"
}

# player_state is between 0 and 14 and varies between gamestyles
# as outlined below. Determining the gamestyle of a particular run
# and rendering the level being played requires TheGreatRambler/mm2_ninji_level
player_state_base = {
    0: "Run/Walk",
    1: "Jump",
    2: "Swim",
    3: "Climbing",
    5: "Sliding",
    7: "Dry bones shell",
    8: "Clown car",
    9: "Cloud",
    10: "Boot",
    11: "Walking cat"
}

player_state_nsmbu = {
    4: "Sliding",
    6: "Turnaround",
    10: "Yoshi",
    12: "Acorn suit",
    13: "Propeller active",
    14: "Propeller neutral"
}

player_state_sm3dw = {
    4: "Sliding",
    6: "Turnaround",
    7: "Clear pipe",
    8: "Cat down attack",
    13: "Propeller active",
    14: "Propeller neutral"
}

player_state_smb1 = {
    4: "Link down slash",
    5: "Crouching"
}

player_state_smw = {
    10: "Yoshi",
    12: "Cape"
}

print("Frames: %d\nCharacter: %s" % (frames, character_mapping[character]))

current_offset = 0x3C
# Ninji updates are reported every 4 frames
for i in range((frames + 2) // 4):
    flags = replay[current_offset] >> 4
    player_state = replay[current_offset] & 0x0F
    current_offset += 1

    x = struct.unpack("<H", replay[current_offset:current_offset + 2])[0]
    current_offset += 2
    y = struct.unpack("<H", replay[current_offset:current_offset + 2])[0]
    current_offset += 2

    if flags & 0b00000110:
        unk1 = replay[current_offset]
        current_offset += 1

    in_subworld = flags & 0b00001000

    print("Frame %d:\n Flags: %s,\n Animation state: %d,\n X: %d,\n Y: %d,\n In subworld: %s"
        % (i, bin(flags), player_state, x, y, in_subworld))

#OUTPUT:
Frames: 5006
Character: Mario
Frame 0:
 Flags: 0b0,
 Animation state: 0,
 X: 2672,
 Y: 2288,
 In subworld: 0
Frame 1:
 Flags: 0b0,
 Animation state: 0,
 X: 2682,
 Y: 2288,
 In subworld: 0
Frame 2:
 Flags: 0b0,
 Animation state: 0,
 X: 2716,
 Y: 2288,
 In subworld: 0
...
Frame 1249:
 Flags: 0b0,
 Animation state: 1,
 X: 59095,
 Y: 3749,
 In subworld: 0
Frame 1250:
 Flags: 0b0,
 Animation state: 1,
 X: 59246,
 Y: 3797,
 In subworld: 0
Frame 1251:
 Flags: 0b0,
 Animation state: 1,
 X: 59402,
 Y: 3769,
 In subworld: 0
```
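
The loop above prints each update; for analysis it is often more convenient to collect the decoded updates into a structured trajectory. The helper below is a minimal sketch that wraps the same parsing logic shown above; the function name `parse_replay` and the dictionary layout are illustrative choices, not part of the dataset format itself.

```python
import zlib
import struct
from typing import List, Tuple

def parse_replay(replay_bytes: bytes) -> Tuple[int, int, List[dict]]:
    """Decode a `replay` blob into (frames, character, updates).

    Each update holds the raw flags, animation state and coordinates,
    sampled every 4 frames as described above.
    """
    replay = zlib.decompress(replay_bytes)
    frames = struct.unpack(">I", replay[0x10:0x14])[0]
    character = replay[0x14]

    updates = []
    offset = 0x3C
    for i in range((frames + 2) // 4):
        flags = replay[offset] >> 4
        player_state = replay[offset] & 0x0F
        offset += 1
        x = struct.unpack("<H", replay[offset:offset + 2])[0]
        offset += 2
        y = struct.unpack("<H", replay[offset:offset + 2])[0]
        offset += 2
        if flags & 0b00000110:
            offset += 1  # skip the extra (unknown) byte
        updates.append({
            "update": i,
            "flags": flags,
            "state": player_state,
            "x": x,
            "y": y,
            "in_subworld": bool(flags & 0b00001000),
        })
    return frames, character, updates
```

With `frames, character, updates = parse_replay(row["replay"])` you can then, for example, plot the (x, y) path of a run or compare animation states across levels.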

You can also download the full dataset. Note that this will download ~12.5GB:
```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_ninji", split="train")
```
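
If you only need a subset, the streaming dataset can also be filtered before iteration. The snippet below is a sketch that keeps only runs shorter than 60 seconds; it assumes a version of `datasets` recent enough to support `filter` on streaming (`IterableDataset`) objects, and the 60-second threshold is an arbitrary example.

```python
from itertools import islice
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")

# Keep only runs shorter than 60 seconds (`time` is in milliseconds)
fast_runs = ds.filter(lambda row: row["time"] < 60_000)

for row in islice(fast_runs, 5):
    print(row["data_id"], row["time"])
```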

## Data Structure

### Data Instances

```python
{
    'data_id': 12171034,
    'pid': '4748613890518923485',
    'time': 83388,
    'replay': [some binary data]
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this run occurred in|
|pid|string|Player ID of the player|
|time|int|Length in milliseconds of the run|
|replay|bytes|Replay file of this run|

### Data Splits

The dataset only contains a train split.
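
Detailed statistics are not yet included in this card, but quick summary numbers can be computed from a streamed sample. The sketch below averages `time` over the first 10,000 rows; the sample size is arbitrary, and a full pass over all 3 million rows would be needed for exact figures.

```python
from itertools import islice
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")

sample_size = 10_000
total_ms = 0
for row in islice(ds, sample_size):
    total_ms += row["time"]

print("Mean run length over %d runs: %.1f seconds"
    % (sample_size, total_ms / sample_size / 1000))
```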

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.