---
task_categories:
  - conversational
language:
  - en
tags:
  - recommendation
---

Dataset Card for Reddit-Movie-large-V1

Dataset Description

Dataset Summary

This dataset contains recommendation-related conversations in the movie domain, intended for research use only, e.g., in conversational recommendation and long-query retrieval tasks.

This dataset spans Jan. 2012 to Dec. 2022. A smaller version (covering Jan. 2022 to Dec. 2022) can be found here.

Dataset Processing

We dumped Reddit conversations from pushshift.io and converted them into raw text about movie recommendations from five subreddits:

After that, we process them by:

  1. extracting movie recommendation conversations;
  2. recognizing movie mentions in raw text;
  3. linking movie mentions to existing movie entities in the IMDb database.
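
Steps 2-3 can be illustrated with a minimal sketch (this is a toy illustration, not the authors' actual pipeline; the `id2name` entries are examples from this card):

```python
import re

# Toy lookup standing in for id2name.json: IMDb id -> display name.
id2name = {"tt0053779": "La Dolce Vita (1960)", "tt3501632": "Thor: Ragnarok (2017)"}

# Reverse index from a normalized title (release year stripped) to its IMDb id.
name2id = {
    re.sub(r"\s*\(\d{4}\)$", "", name).lower(): item_id
    for item_id, name in id2name.items()
}

def link_mentions(text: str) -> str:
    """Replace known movie titles in raw text with their IMDb ids."""
    for title, item_id in name2id.items():
        text = re.sub(re.escape(title), item_id, text, flags=re.IGNORECASE)
    return text

print(link_mentions("We decided on Thor: Ragnarok."))
# -> We decided on tt3501632.
```

Real mention recognition in noisy Reddit text is much harder than exact title matching, which is why the card notes failure cases below.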

Since the raw text is quite noisy and our processing is not perfect, we do observe some failure cases in the processed data. We therefore use V1 to highlight that this is the first processed version. Contributions of cleaner processed versions (such as V2) are welcome in the future, many thanks!

Disclaimer

⚠️ Please note that conversations processed from raw Reddit data may include content that is not conducive to a positive experience (e.g., toxic speech). Exercise caution and discretion when utilizing this data.

Dataset Structure

Data Fields

  • id2name.json provides a lookup table (dictionary) from itemid (e.g., tt0053779) to itemname (e.g., La Dolce Vita (1960)). Note that the itemid comes from IMDb, so it can be used to align this dataset with other movie recommendation datasets sharing the same itemid, such as MovieLens.
  • {train, valid, test}.csv contain question-answer pairs for training, validation, and testing (split chronologically by each dialog's creation timestamp, from oldest to newest). There are 12 columns in these *.csv files:
    • conv_id (string): Conversational ID. Since our conversations are collected from reddit posts, we generate conversations by extracting paths in a reddit thread with different replies. An example of conv_id is:
      "t3_rt7enj_0/14" # -> t3_rt7enj is the ID of the first post in the thread, 0 means this is the first path extracted from this thread, and 13 means there are 13 paths in total.
      
    • turn_id (string): Conversational turn ID. For example:
      "t3_rt7enj" # -> We can use (conv_id, turn_id) to uniquely define a row in this dataset.
      
    • turn_order (int64): The index of this turn within a given conversation, which can be used to sort turns within the conversation. For example:
      0 # -> This is the first turn in this conversation. For conversations from Reddit, the number of turns is typically not very large.
      
    • user_id (string): The unique user id. For example:
      "t2_fweij" # -> user id
      
    • is_seeker (bool): Whether the speaker at the current turn is the recommendation seeker or not. For example:
      true # -> This turn comes from the seeker (the seeker starts a movie-requesting conversation on Reddit).
      
    • utc_time (int64): The UTC timestamp when this conversation turn happened. For example:
      1641234238 # -> Try `datetime.fromtimestamp(1641234238)`
      
    • upvotes (int64): The number of upvotes from other Reddit users (it is null if this post is the first post in the thread, because upvotes only apply to replies). For example:
      6 # -> 6 upvotes from other Reddit users.
      
    • processed (string): The role and text at this conversation turn (processed version). For example:
      "['USER', 'We decided on tt3501632. They love it so far— very funny!']" # -> [ROLE, Processed string] after `eval()`, where we can match `tt3501632` to real item name using `id2name.json`.
      
    • raw (string): The role and text at this conversation turn (raw-text version). For example:
      "['USER', 'We decided on Thor: Ragnarok. They love it so far— very funny!']" # -> [ROLE, Raw string] after `eval()`, where it is convenient to format it as "USER: We decided on Thor: Ragnarok. They love it so far— very funny!".
      
    • context_processed (string): The role and text pairs as the historical conversation context (processed version). For example:
      "[['USER', 'It’s summer break ... Some of the films we have watched (and they enjoyed) in the past are tt3544112, tt1441952, tt1672078, tt0482571, tt0445590, tt0477348...'], ['SYSTEM', "I'm not big on super hero movies, but even I loved the tt2015381 movies ..."]]"
      # -> [[ROLE, Processed string], [ROLE, Processed string], ...] after `eval()`, where we can match `tt******` to real item name using `id2name.json`.
      
    • context_raw (string): The role and text pairs as the historical conversation context (raw version). For example:
      "[['USER', 'It’s summer break ... Some of the films we have watched (and they enjoyed) in the past are Sing Street, Salmon Fishing in the Yemen, The Life of Pi, The Prestige, LOTR Trilogy, No Country for Old Men...'], ['SYSTEM', "I'm not big on super hero movies, but even I loved the guardians of the Galaxy movies ..."]]"
      # -> [[ROLE, Raw string], [ROLE, Raw string], ...] after `eval()`, where we can form "USER: ...\n SYSTEM: ...\n USER: ..." easily.
      
    • context_turn_ids (string): The conversation context turn_ids associated with context [ROLE, Processed string] pairs. For example:
      "['t3_8voapb', 't1_e1p0f5h'] # -> This is the `turn_id`s for the context ['USER', 'It’s summer break ...'], ['SYSTEM', "I'm not big on super hero movie...']. They can used to retrieve more related information like `utc_time` after combining with `conv_id`.
      

Data Splits

We hold out the last 20% of the data (in chronological order by conversation creation time) as the test set; the rest can be used for training. We provide a suggested split of the training portion into Train and Validation, but you are free to try your own splits.
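
The chronological hold-out can be sketched in a few lines (a toy illustration over hypothetical rows, not the script that produced the released files):

```python
# Toy rows standing in for the full dataset: (conv_id, utc_time) per turn.
rows = [("c1", 100), ("c1", 110), ("c2", 200), ("c3", 300), ("c4", 400), ("c5", 500)]

# Earliest turn timestamp per conversation (its creation time).
start = {}
for conv_id, ts in rows:
    start[conv_id] = min(start.get(conv_id, ts), ts)

# Sort conversations chronologically and hold out the last 20% as the test set.
ordered = sorted(start, key=start.get)
cutoff = int(len(ordered) * 0.8)
train_convs, test_convs = set(ordered[:cutoff]), set(ordered[cutoff:])

print(sorted(test_convs))  # -> ['c5']
```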

|        | Total     | Train + Validation | Test    |
|--------|-----------|--------------------|---------|
| #Conv. | 634,392   | 570,955            | 63,437  |
| #Turns | 1,669,720 | 1,514,537          | 155,183 |
| #Users | 36,247    | 32,676             | 4,559   |
| #Items | 51,203    | 48,838             | 20,275  |

Citation Information

Please cite these two papers if you use this dataset, thanks!

@inproceedings{he23large,
  title = "Large language models as zero-shot conversational recommenders",
  author = "Zhankui He and Zhouhang Xie and Rahul Jha and Harald Steck and Dawen Liang and Yesu Feng and Bodhisattwa Majumder and Nathan Kallus and Julian McAuley",
  year = "2023",
  booktitle = "CIKM"
}
@inproceedings{baumgartner2020pushshift,
  title={The pushshift reddit dataset},
  author={Baumgartner, Jason and Zannettou, Savvas and Keegan, Brian and Squire, Megan and Blackburn, Jeremy},
  booktitle={Proceedings of the international AAAI conference on web and social media},
  volume={14},
  pages={830--839},
  year={2020}
}

Please contact Zhankui He if you have any questions or suggestions.