![Header](https://github.com/holodata/vtuber-livechat-dataset/blob/master/.github/kaggle-dataset-header.png?raw=true)
# VTuber 500M: Live Chat and Moderation Events
VTuber 500M is a large-scale collection of hundreds of millions of live chat, Super Chat, and moderation events (bans and deletions) from Virtual YouTubers' live streams, ready for academic research and NLP projects of any kind.
Download the dataset from [Kaggle Datasets](https://www.kaggle.com/uetchy/vtuber-livechat) and join `#livechat-dataset` channel on [holodata Discord](https://holodata.org/discord) for discussions.
## Provenance
- **Source:** YouTube live chat events collected by our [Honeybee](https://github.com/holodata/honeybee) cluster. [Holodex](https://holodex.net) serves as the stream index provider for Honeybee, covering Hololive, Nijisanji, 774inc, and others.
- **Temporal Coverage:**
- Chats: from 2021-01-15T05:15:33Z
- Superchats: from 2021-03-16T08:19:38Z
- **Update Frequency:**
- At least once per month
## Research Ideas
- Toxic Chat Classification
- Spam Detection
- Demographic Visualization
- Superchat Analysis
- Sentence Transformer for Live Chats
See [public notebooks](https://www.kaggle.com/uetchy/vtuber-livechat/code) for ideas.
> We employed the [Honeybee](https://github.com/holodata/honeybee) cluster to collect real-time live chat events across major VTubers' live streams. All sensitive data, such as author names and profile images, are omitted from the dataset, and author channel ids are anonymized with a salted SHA-1 hash.
## Versions
### Standard version
The standard version is available at [Kaggle Datasets](https://www.kaggle.com/uetchy/vtuber-livechat).
| filename | summary | size |
| ---------------------- | -------------------------------- | -------- |
| `channels.csv` | Channel index | < 1 MB |
| `chat_stats.csv` | Chat statistics | < 1 MB |
| `superchat_stats.csv` | Super Chat statistics | < 1 MB |
| `chats_%Y-%m.csv` | Live chat events (~ 500,000,000) | ~ 50 GB |
| `superchats_%Y-%m.csv` | Super chat events (~ 2,000,000) | ~ 200 MB |
| `deletion_events.csv` | Deletion events | ~ 150 MB |
| `ban_events.csv` | Ban events | ~ 25 MB |
### Full version
The full version is available only to those approved by the admins. If you are interested in conducting research or analysis using the dataset, please reach out to us in the `#livechat-dataset` channel on the [holodata Discord server](https://holodata.org/discord) or at `uechiy@acm.org` (for organizations).
| filename | summary | size |
| ---------------------- | ---------------------------------- | -------- |
| `channels.csv` | Channel index | < 1 MB |
| `chat_stats.csv` | Chat statistics | < 1 MB |
| `superchat_stats.csv` | Super Chat statistics | < 1 MB |
| `chats_%Y-%m.csv` | Live chat messages (~ 500,000,000) | ~ 90 GB |
| `superchats_%Y-%m.csv` | Super chat messages (~ 2,000,000) | ~ 400 MB |
| `deletion_events.csv` | Deletion events | ~ 150 MB |
| `ban_events.csv` | Ban events | ~ 25 MB |
### [❤️🩹 Sensai](https://github.com/holodata/sensai-dataset)
Sensai is a toxic chat dataset consisting of live chats from Virtual YouTubers' live streams.
| filename | summary | size |
| ------------------------- | -------------------------------------------------------------- | -------- |
| `chats_flagged_%Y-%m.csv` | Chats flagged as either deleted or banned by mods (3,100,000+) | ~ 400 MB |
| `chats_nonflag_%Y-%m.csv` | Non-flagged chats (3,000,000+) | ~ 300 MB |
## Dataset Breakdown
> Ban and deletion correspond to the `markChatItemsByAuthorAsDeletedAction` and `markChatItemAsDeletedAction` events, respectively.
### Channels (`channels.csv`)
| column | type | description |
| ----------------- | --------------- | ---------------------- |
| channelId | string | channel id |
| name | string | channel name |
| englishName | nullable string | channel name (English) |
| affiliation | string | channel affiliation |
| group | nullable string | group |
| subscriptionCount | number | subscription count |
| videoCount | number | uploads count |
| photo | string | channel icon |
Inactive channels have `INACTIVE` in the `group` column.
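For example, a minimal sketch that loads the channel index and keeps only active channels (assuming the same Kaggle input path as the examples below):

```python
import pandas as pd

channels = pd.read_csv('../input/vtuber-livechat/channels.csv')

# Keep active channels; inactive ones carry INACTIVE in `group`
# (a null `group` means the channel is active but ungrouped)
active = channels[channels['group'] != 'INACTIVE']
```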
### Chat Statistics (`chat_stats.csv`)
| column | type | description |
| -------------- | ------ | -------------------------------------------------- |
| channelId | string | channel id |
| period         | string | period of interest (%Y-%m)                          |
| chats | number | number of chats |
| memberChats | number | number of chats with membership status attached |
| uniqueChatters | number | number of unique chatters |
| uniqueMembers | number | number of unique members appeared on live chat |
| bannedChatters | number | number of unique chatters marked as banned by mods |
| deletedChats | number | number of chats deleted by mods |
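These per-channel, per-period counters make simple moderation metrics easy to derive; a sketch ranking channels by the share of chats deleted by moderators (`deletion_rate` is a hypothetical derived column, not part of the file):

```python
import pandas as pd

stats = pd.read_csv('../input/vtuber-livechat/chat_stats.csv')

# Share of chats in each channel/period removed by moderators
stats['deletion_rate'] = stats['deletedChats'] / stats['chats']
print(stats.sort_values('deletion_rate', ascending=False)
           [['channelId', 'period', 'deletion_rate']]
           .head())
```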
### Super Chat Statistics (`superchat_stats.csv`)
| column | type | description |
| -------------------- | ------ | ---------------------------------- |
| channelId | string | channel id |
| period               | string | period of interest (%Y-%m)         |
| superChats | number | number of super chats |
| uniqueSuperChatters | number | number of unique super chatters |
| totalSC | number | total amount of super chats (JPY) |
| averageSC | number | average amount of super chat (JPY) |
| totalMessageLength | number | total message length |
| averageMessageLength | number | average message length             |
| mostFrequentCurrency | string | most frequent currency |
| mostFrequentColor | string | most frequent color |
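Since the statistics files only carry `channelId`, joining against `channels.csv` makes them readable; a sketch ranking channel/period pairs by total Super Chat revenue:

```python
import pandas as pd

sc_stats = pd.read_csv('../input/vtuber-livechat/superchat_stats.csv')
channels = pd.read_csv('../input/vtuber-livechat/channels.csv',
                       usecols=['channelId', 'name'])

# Attach readable channel names and rank by total revenue (JPY)
ranked = (sc_stats.merge(channels, on='channelId', how='left')
                  .sort_values('totalSC', ascending=False))
print(ranked[['name', 'period', 'totalSC', 'superChats']].head())
```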
### Chats (`chats_%Y-%m.csv`)
| column | type | description | in standard version |
| --------------- | ---------------- | ---------------------------- | ------------------------ |
| timestamp | string | ISO 8601 UTC timestamp | seconds are omitted |
| id | string | anonymized chat id | N/A |
| authorChannelId | string | anonymized author channel id | |
| channelId | string | source channel id | |
| videoId | string | source video id | |
| body | string | chat message | N/A |
| membership | string | membership status | N/A |
| isMember | nullable boolean | is member (null if unknown) | only in standard version |
| isModerator | boolean | is channel moderator | N/A |
| isVerified | boolean | is verified account | N/A |
#### Membership status
| value | duration |
| ----------------- | ------------------------- |
| unknown | Indistinguishable |
| non-member | 0 |
| less than 1 month | < 1 month |
| 1 month | >= 1 month, < 2 months |
| 2 months | >= 2 months, < 6 months |
| 6 months | >= 6 months, < 12 months |
| 1 year | >= 12 months, < 24 months |
| 2 years | >= 24 months |
#### Pandas usage
Set `keep_default_na` to `False` and `na_values` to `''` in `read_csv`; otherwise, a chat message like `NA` would incorrectly be treated as a NaN value.
```python
chats = pd.read_csv('../input/vtuber-livechat/chats_2021-03.csv',
na_values='',
keep_default_na=False,
index_col='timestamp',
parse_dates=True)
```
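When working with the full version (the standard version omits `membership`), treating the column as an ordered categorical keeps comparisons and sorting consistent with the duration table above; a sketch, with the labels taken from that table:

```python
# Membership statuses ordered by duration (full version only)
levels = ['unknown', 'non-member', 'less than 1 month', '1 month',
          '2 months', '6 months', '1 year', '2 years']
chats['membership'] = pd.Categorical(chats['membership'],
                                     categories=levels, ordered=True)
```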
### Superchats (`superchats_%Y-%m.csv`)
| column | type | description | in standard version |
| --------------- | --------------- | ---------------------------- | ------------------- |
| timestamp | string | ISO 8601 UTC timestamp | seconds are omitted |
| amount | number | purchased amount | |
| currency | string | three-letter currency symbol | |
| color | string | color | N/A |
| significance | number | significance | |
| body | nullable string | chat message | N/A |
| id | string | anonymized chat id | N/A |
| authorChannelId | string | anonymized author channel id | |
| videoId | string | source video id | N/A |
| channelId | string | source channel id | |
#### Color and Significance
| color | significance | purchase amount (¥) | purchase amount ($) | max. message length |
| --------- | ------------ | ------------------- | ------------------- | ------------------- |
| blue | 1 | ¥ 100 - 199 | $ 1.00 - 1.99 | 0 |
| lightblue | 2 | ¥ 200 - 499 | $ 2.00 - 4.99 | 50 |
| green | 3 | ¥ 500 - 999 | $ 5.00 - 9.99 | 150 |
| yellow | 4 | ¥ 1000 - 1999 | $ 10.00 - 19.99 | 200 |
| orange | 5 | ¥ 2000 - 4999 | $ 20.00 - 49.99 | 225 |
| magenta | 6 | ¥ 5000 - 9999 | $ 50.00 - 99.99 | 250 |
| red | 7 | ¥ 10000 - 50000 | $ 100.00 - 500.00 | 270 - 350 |
#### Pandas usage
Set `keep_default_na` to `False` and `na_values` to `''` in `read_csv`; otherwise, a chat message like `NA` would incorrectly be treated as a NaN value.
```python
import pandas as pd
from glob import iglob
sc = pd.concat([
pd.read_csv(f,
na_values='',
keep_default_na=False,
index_col='timestamp',
parse_dates=True)
for f in iglob('../input/vtuber-livechat/superchats_*.csv')
],
ignore_index=False)
sc.sort_index(inplace=True)
```
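Since `significance` maps one-to-one to the color tiers above and is present in both versions, it is a convenient grouping key; a sketch tallying purchases per tier and currency (`amount` is in the purchase currency, not converted to JPY):

```python
# Count and sum purchases per tier; sum within each currency separately
# because `amount` is not normalized across currencies
summary = (sc.groupby(['significance', 'currency'])['amount']
             .agg(['count', 'sum']))
print(summary.head(10))
```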
### Deletion Events (`deletion_events.csv`)
| column | type | description |
| --------- | ------- | ---------------------------- |
| timestamp | string | UTC timestamp |
| id | string | anonymized chat id |
| retracted | boolean | is deleted by author oneself |
| videoId | string | source video id |
| channelId | string | source channel id |
#### Pandas usage
Insert a `deleted_by_mod` column into the `chats` DataFrame:
```python
chats = pd.read_csv('../input/vtuber-livechat/chats_2021-03.csv',
                    na_values='',
                    keep_default_na=False)
delet = pd.read_csv('../input/vtuber-livechat/deletion_events.csv',
                    usecols=['id', 'retracted'])

# Keep deletions performed by moderators, not retractions by the author,
# and drop redundant events so the merge does not duplicate chat rows
# (see "Redundant Ban and Deletion Events" below)
delet = delet[delet['retracted'] == 0].drop_duplicates('id')
delet['deleted_by_mod'] = True

chats = pd.merge(chats, delet[['id', 'deleted_by_mod']], on='id', how='left')
chats['deleted_by_mod'] = chats['deleted_by_mod'].fillna(False)
```
### Ban Events (`ban_events.csv`)
Here **ban** means either placing a user in timeout or permanently hiding the user's comments on the channel's current and future live streams. The two cannot be distinguished because both are delivered as the same `markChatItemsByAuthorAsDeletedAction` event.
| column | type | description |
| --------------- | ------ | --------------------- |
| timestamp | string | UTC timestamp |
| authorChannelId | string | anonymized channel id |
| videoId | string | source video id |
| channelId | string | source channel id |
#### Pandas usage
Insert a `banned` column into the `chats` DataFrame:
```python
chats = pd.read_csv('../input/vtuber-livechat/chats_2021-03.csv',
                    na_values='',
                    keep_default_na=False)
ban = pd.read_csv('../input/vtuber-livechat/ban_events.csv',
                  usecols=['authorChannelId', 'videoId'])

# Drop redundant events so the merge does not duplicate chat rows
# (see "Redundant Ban and Deletion Events" below)
ban = ban.drop_duplicates()
ban['banned'] = True

chats = pd.merge(chats, ban, on=['authorChannelId', 'videoId'], how='left')
chats['banned'] = chats['banned'].fillna(False)
```
## Consideration
### Anonymization
`id` and `authorChannelId` are anonymized with a SHA-1 hash and an undisclosed salt.
### Handling Custom Emojis
All custom emojis are replaced with the Unicode replacement character (`U+FFFD`).
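A sketch for counting or stripping the replacement character before text processing (assuming a `chats` frame from the full version, which includes `body`; `emoji_count` and `body_plain` are hypothetical derived columns):

```python
# Custom emojis appear as the replacement character U+FFFD
chats['emoji_count'] = chats['body'].str.count('\ufffd')
chats['body_plain'] = chats['body'].str.replace('\ufffd', '', regex=False)
```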
### Redundant Ban and Deletion Events
Bans and deletions issued by multiple moderators for the same user or chat are logged as separate rows. For simplicity, you can safely ignore all but the first event recorded in time order.
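A sketch of that deduplication (assuming the event files are reloaded with their `timestamp` columns, which the merge examples above drop via `usecols`):

```python
delet = pd.read_csv('../input/vtuber-livechat/deletion_events.csv')
ban = pd.read_csv('../input/vtuber-livechat/ban_events.csv')

# Keep only the earliest event per chat / per (author, video)
delet = delet.sort_values('timestamp').drop_duplicates('id', keep='first')
ban = ban.sort_values('timestamp').drop_duplicates(
    ['authorChannelId', 'videoId'], keep='first')
```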
## Citation
```latex
@misc{vtuber-livechat-dataset,
author={Yasuaki Uechi},
title={VTuber 500M: Large Scale Virtual YouTubers Live Chat Dataset},
year={2021},
month={3},
version={31},
url={https://github.com/holodata/vtuber-livechat-dataset}
}
```
## License
- Code: [MIT License](https://github.com/holodata/vtuber-livechat-dataset/blob/master/LICENSE)
- Dataset: [ODC Public Domain Dedication and Licence (PDDL)](https://opendatacommons.org/licenses/pddl/1-0/index.html)