---
task_categories:
- conversational
language:
- en
tags:
- recommendation
---

# Dataset Card for `Reddit-Movie-small-V1`

## Dataset Description

- **Homepage:** https://github.com/AaronHeee/LLMs-as-Zero-Shot-Conversational-RecSys
- **Repository:** https://github.com/AaronHeee/LLMs-as-Zero-Shot-Conversational-RecSys
- **Paper:** To appear
- **Point of Contact:** zhh004@eng.ucsd.edu

### Dataset Summary

This dataset contains recommendation-related conversations in the movie domain, intended for research use only, e.g., in conversational recommendation and long-query retrieval tasks.

This dataset ranges from Jan. 2022 to Dec. 2022. A larger version (from Jan. 2012 to Dec. 2022) can be found [here](https://huggingface.co/datasets/ZhankuiHe/reddit_movie_large_v1).


### Dataset Processing

We dump [Reddit](https://reddit.com) conversations about movie recommendations from [pushshift.io](https://pushshift.io) and convert them into [raw text](https://huggingface.co/datasets/ZhankuiHe/reddit_movie_raw), covering five subreddits:

- [r/movies](https://www.reddit.com/r/movies/)
- [r/moviesuggestions](https://www.reddit.com/r/MovieSuggestions/)
- [r/bestofnetflix](https://www.reddit.com/r/bestofnetflix/)
- [r/netflixbestof](https://www.reddit.com/r/netflixbestof/)
- [r/truefilm](https://www.reddit.com/r/truefilm/)


After that, we process them by:
1. extracting movie recommendation conversations;
2. recognizing movie mentions in raw text;
3. linking movie mentions to existing movie entities in [IMDB](https://imdb.com) database.
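
Steps 2 and 3 can be sketched as a substitution over a title lookup. This is only an illustrative assumption: the real pipeline links mentions against the full [IMDB](https://imdb.com) database, not a toy dictionary, and the matching is more involved than a case-insensitive regex.

```python
import re

# Hypothetical title -> IMDB-id lookup; a stand-in for the real IMDB linking step.
NAME_TO_ID = {
    "thor: ragnarok": "tt3501632",
    "la dolce vita": "tt0053779",
}

def link_mentions(text: str) -> str:
    """Replace recognized movie titles with their IMDB ids (steps 2-3)."""
    for name, item_id in NAME_TO_ID.items():
        text = re.sub(re.escape(name), item_id, text, flags=re.IGNORECASE)
    return text

print(link_mentions("We decided on Thor: Ragnarok."))
# -> We decided on tt3501632.
```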

Since the raw text is quite noisy and our processing is not perfect, we do observe some failure cases in the processed data. We therefore use V1 to highlight that this is the first processed version. Contributions of cleaner processed versions (such as V2) are very welcome, many thanks!

### Disclaimer

⚠️ **Please note that conversations processed from Reddit raw data may include content that is not entirely conducive to a positive experience (e.g., toxic speech). Exercise caution and discretion when utilizing this information.**

## Dataset Structure

### Data Fields

- `id2name.json` provides a lookup table (dictionary) from `itemid` (e.g., `tt0053779`) to `itemname` (e.g., `La Dolce Vita (1960)`). Note that, the `itemid` is from [IMDB](https://imdb.com), so that it can be used to align other movie recommendation datasets sharing the same `itemid`, such as [MovieLens](https://movielens.org/).
- `{train, valid, test}.csv` are question-answer pairs that can be used for training, validation, and testing (split chronologically by dialog creation timestamp, from oldest to most recent). There are 12 columns in these `*.csv` files:
    - `conv_id (string)`: Conversational ID. Since our conversations are collected from reddit posts, we generate conversations by extracting paths in a reddit thread with different replies. An example of `conv_id` is:
        ```
        "t3_rt7enj_0/14" # -> t3_rt7enj is the ID of the first post in the thread, 0 means this is the first path extracted from this thread, and 13 means there are 13 paths in total.
        ```
    - `turn_id (string)`: Conversational turn ID. For example:
        ```
        "t3_rt7enj" # -> We can use (conv_id, turn_id) to uniquely define a row in this dataset.
        ```
    - `turn_order (int64)`: No.X turn in a given conversation, which can be used to sort turns within the conversation. For example:
        ```
        0 # -> This is the first turn in the conversation. For conversations from Reddit, the number of turns is usually small.
        ```
    - `user_id (string)`: The unique user id. For example:
        ```
        "t2_fweij" # -> user id
        ```
    - `is_seeker (bool)`: Whether the speaker at the current turn is the recommendation seeker. For example:
        ```
        true # -> This turn's speaker is the seeker (the seeker starts a movie-request conversation on Reddit).
        ```
    - `utc_time (int64)`: The UTC timestamp when this conversation turn happened. For example:
        ```
        1641234238 # -> Try `datetime.fromtimestamp(1641234238)`
        ```
    - `upvotes (int64)`: The number of upvotes from other Reddit users (`null` if this is the first post in the thread, since upvotes apply only to replies). For example:
        ```
        6 # -> 6 upvotes from other Reddit users.
        ```
    - `processed (string)`: The role and text at this conversation turn (processed version). For example:
        ```
        "['USER', 'We decided on tt3501632. They love it so far— very funny!']" # -> [ROLE, Processed string] after `eval()`, where we can match `tt3501632` to real item name using `id2name.json`.
        ```
    - `raw (string)`: The role and text at this conversation turn (raw-text version). For example:
        ```
        "['USER', 'We decided on Thor: Ragnarok. They love it so far— very funny!']" # -> [ROLE, Raw string] after `eval()`, where it is convinient to form it as "USER: We decided on Thor: Ragnarok. They love it so far— very funny!".
        ```
    - `context_processed (string)`: The role and text pairs as the historical conversation context (processed version). For example:
        ```
        "[['USER', 'It’s summer break ... Some of the films we have watched (and they enjoyed) in the past are tt3544112, tt1441952, tt1672078, tt0482571, tt0445590, tt0477348...'], ['SYSTEM', "I'm not big on super hero movies, but even I loved the tt2015381 movies ..."]]"
        # -> [[ROLE, Processed string], [ROLE, Processed string], ...] after `eval()`, where we can match `tt******` to real item name using `id2name.json`.
        ```
    - `context_raw (string)`: The role and text pairs as the historical conversation context (raw version). For example:
        ```
        "[['USER', 'It’s summer break ... Some of the films we have watched (and they enjoyed) in the past are Sing Street, Salmon Fishing in the Yemen, The Life of Pi, The Prestige, LOTR Trilogy, No Country for Old Men...'], ['SYSTEM', "I'm not big on super hero movies, but even I loved the guardians of the Galaxy movies ..."]]"
        # -> [[ROLE, Raw string], [ROLE, Raw string], ...] after `eval()`, where we can form "USER: ...\n SYSTEM: ...\n USER: ..." easily.
        ```
    - `context_turn_ids (string)`: The conversation context turn_ids associated with context [ROLE, Processed string] pairs. For example:
        ```
        "['t3_8voapb', 't1_e1p0f5h'] # -> This is the `turn_id`s for the context ['USER', 'It’s summer break ...'], ['SYSTEM', "I'm not big on super hero movie...']. They can used to retrieve more related information like `utc_time` after combining with `conv_id`.
        ```
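
Putting the fields together, a `processed` cell can be decoded and its item ids mapped back to names via `id2name.json`. A minimal sketch using the example values above; the lookup dictionary here is an inline stand-in (the `(2017)` year follows the `La Dolce Vita (1960)` naming convention and is an assumption), and in practice you would load the real files with `json.load` and `pandas.read_csv`:

```python
import ast

# Stand-in for id2name.json.
id2name = {"tt3501632": "Thor: Ragnarok (2017)"}

# One `processed` cell, adapted from the example above.
processed = "['USER', 'We decided on tt3501632. They love it so far, very funny!']"

role, text = ast.literal_eval(processed)  # safer than eval() on untrusted text
for item_id, name in id2name.items():
    text = text.replace(item_id, name)
print(f"{role}: {text}")
```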
        
### Data Splits

We hold out the last 20% of the data (in chronological order by conversation creation time) as the test set; the rest serves as training samples. We also provide a suggested split of the training portion into Train and Validation, but you are free to use your own splits.
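
The chronological hold-out can be sketched in a few lines (the conversation ids and timestamps below are toy values, purely illustrative):

```python
# Toy (conv_id, creation_timestamp) pairs standing in for real conversations.
convs = [
    ("c1", 1609459200),
    ("c3", 1614556800),
    ("c2", 1612137600),
    ("c5", 1619827200),
    ("c4", 1617235200),
]

convs.sort(key=lambda c: c[1])   # chronological order by creation time
cut = int(len(convs) * 0.8)      # hold out the last 20% as the test set
train_valid, test = convs[:cut], convs[cut:]
print([cid for cid, _ in train_valid], [cid for cid, _ in test])
# -> ['c1', 'c2', 'c3', 'c4'] ['c5']
```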

|  | Total | Train + Validation | Test |
| - | - | - | - |
| #Conv. |  171,773 | 154,597 | 17,176 |
| #Turns  |  419,233  | 377,614 | 41,619 |
| #Users  |   12,508  | 11,477  | 1,384 |
| #Items  | 31,396  | 30,146 | 10,434 |


### Citation Information


Please cite these two papers if you use this dataset, thanks!

```bib
@inproceedings{he23large,
  title = "Large language models as zero-shot conversational recommenders",
  author = "Zhankui He and Zhouhang Xie and Rahul Jha and Harald Steck and Dawen Liang and Yesu Feng and Bodhisattwa Majumder and Nathan Kallus and Julian McAuley",
  year = "2023",
  booktitle = "CIKM"
}
```

```bib
@inproceedings{baumgartner2020pushshift,
  title={The pushshift reddit dataset},
  author={Baumgartner, Jason and Zannettou, Savvas and Keegan, Brian and Squire, Megan and Blackburn, Jeremy},
  booktitle={Proceedings of the international AAAI conference on web and social media},
  volume={14},
  pages={830--839},
  year={2020}
}
```


Please contact [Zhankui He](https://aaronheee.github.io) if you have any questions or suggestions.