yonatanko committed
Commit f397922 · verified · 1 Parent(s): adbd938

Update README.md

Files changed (1): README.md (+50 -31)
README.md CHANGED
@@ -48,11 +48,11 @@ size_categories:
  ### Dataset Summary

  The Relationship Advice dataset is an English-language compilation of posts and their respective comments concerning dating and human romantic relationships. The primary objective of this dataset is to aid LLMs in categorizing responses and providing appropriate answers based on the emotional needs expressed by the writer.
- The data was gatherd from two subreddits: [r/dating_advice](https://www.reddit.com/r/dating_advice/) and [r/relationship_advice](https://www.reddit.com/r/relationship_advice/).

  ### Supported Tasks and Leaderboards

- - `text-classification`: This dataset can be utilized to train a model for text classification. The model should categorize the comments into one of 6 labels, considering the post as the broader context.
  - `Natural Language Inference`: Given a post and its two comments, the model needs to decide which comment is more helpful to the post writer. Thus, the model must infer the subtle semantics of both comments and their related post.

  ### Languages
@@ -63,65 +63,84 @@ The text in the dataset is in English

  ### Data Instances

- Each data point consists of a post, two comments, two labels for the comments, and a third label indicating which comment is more helpful to the post writer.

  ### Data Fields
-
- - `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps.
- - `subreddit`: One of `explainlikeimfive`, `askscience`, or `AskHistorians`, indicating which subreddit the question came from
- - `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens
- - `title_urls`: list of the extracted URLs, the `n`th element of the list was replaced by `URL_n`
- - `selftext`: either an empty string or an elaboration of the question
- - `selftext_urls`: similar to `title_urls` but for `self_text`
- - `answers`: a list of answers, each answer has:
- - `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps.
- - `text`: the answer text with the URLs normalized
- - `score`: the number of upvotes the answer had received when the dumps were created
- - `answers_urls`: a list of the extracted URLs. All answers use the same list, the numbering of the normalization token continues across answer texts

  ### Data Splits

- The data is split into a training, validation and test set for each of the three subreddits. In order to avoid having duplicate questions in across sets, the `title` field of each of the questions were ranked by their tf-idf match to their nearest neighbor and the ones with the smallest value were used in the test and validation sets. The final split sizes are as follow:

- | | Train | Valid | Test |
- | ----- | ------ | ----- | ---- |
- | r/explainlikeimfive examples| 272634 | 9812 | 24512|
- | r/askscience examples | 131778 | 2281 | 4462 |
- | r/AskHistorians examples | 98525 | 4901 | 9764 |

  ## Dataset Creation

- ### Curation Rationale

- ELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.

  ### Source Data

  #### Initial Data Collection and Normalization

- The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/).
-
- In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period form August 2012 to August 2019.

  #### Who are the source language producers?

- The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits between 2012 and 2019. No further demographic information was available from the data source.

  ### Annotations

- The dataset does not contain any additional annotations.

  #### Annotation process

- [N/A]

  #### Who are the annotators?

- [N/A]

  ### Personal and Sensitive Information

- The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news.

  ## Considerations for Using the Data

  ### Dataset Summary

  The Relationship Advice dataset is an English-language compilation of posts and their respective comments concerning dating and human romantic relationships. The primary objective of this dataset is to aid LLMs in categorizing responses and providing appropriate answers based on the emotional needs expressed by the writer.
+ The data was gathered from two subreddits: [r/dating_advice](https://www.reddit.com/r/dating_advice/) and [r/relationship_advice](https://www.reddit.com/r/relationship_advice/).

  ### Supported Tasks and Leaderboards

+ - `text-classification`: This dataset can be used to train a text classification model. The model should categorize the comments into one of 6 labels, considering the post as the broader context.
  - `Natural Language Inference`: Given a post and its two comments, the model needs to decide which comment is more helpful to the post writer. Thus, the model must infer the subtle semantics of both comments and their related post.
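
For concreteness, here is a minimal sketch of how the two tasks above could be framed with the `datasets` library; the repo id passed to `load_dataset` is a hypothetical placeholder, not a published loader path.

```python
# Minimal sketch of both tasks; the repo id below is a hypothetical
# placeholder, not a verified dataset path.
from datasets import load_dataset

ds = load_dataset("yonatanko/relationship_advice", split="train")
example = ds[0]

# Task 1 (text-classification): label one comment, with the post as context.
task1_input = f"POST: {example['post']}\nCOMMENT: {example['comment_1']}"

# Task 2 (pairwise helpfulness): present the post with both comments and
# predict which comment is more helpful to the writer.
task2_input = (
    f"POST: {example['post']}\n"
    f"COMMENT 1: {example['comment_1']}\n"
    f"COMMENT 2: {example['comment_2']}"
)
```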

  ### Languages
 

  ### Data Instances

+ Each data point consists of a post, two comments, two labels (one per comment, for the first task), and a third label indicating which comment is more helpful to the post writer (for the second task).

  ### Data Fields
+ - `example_id`: Index of the example, ranging from 1 to 400
+ - `post`: The post text
+ - `comment_1`: The first comment on the post
+ - `comment_2`: The second comment on the post
+ - `comment_1_label`: The label of the first comment
+ - `comment_2_label`: The label of the second comment
+ - `batch`: The annotation batch this data point belongs to; one of "exploration", "evaluation", or "part 3"
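
Put together, a single data point looks roughly like the following; the text and label values here are invented purely for illustration.

```python
# A hypothetical data point (all values invented for illustration).
example = {
    "example_id": 17,
    "post": "My partner and I keep arguing about small things. How do I bring it up calmly?",
    "comment_1": "Set aside a fixed time each week to talk things through without distractions.",
    "comment_2": "Arguing about small stuff usually means something bigger is bothering one of you.",
    "comment_1_label": "Practical Advice",
    "comment_2_label": "Commentators' opinion",
    "batch": "evaluation",
}
```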

  ### Data Splits

+ The data is split into training, validation, and test sets. Within each batch, samples are assigned at random according to the following scheme:
+
+ The test set (150 examples) consists only of samples from the "part 3" batch, since these were annotated by the external annotators and are therefore of the highest quality.
+
+ The validation set (40 examples) consists only of samples from the "evaluation" batch, the second-highest-quality batch.
+
+ The training set (210 examples) consists of all remaining samples.
+
+ | Batch | Train | Valid | Test |
+ | ------------ | :-------: | :-----: | :-----: |
+ | exploration | 80 | 0 | 0 |
+ | evaluation | 40 | 40 | 0 |
+ | part 3 | 90 | 0 | 150 |
+
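
A sketch of how this split could be reproduced from the `batch` field follows; the card says the within-batch assignment is random but publishes no seed, so the seed and the `data` placeholder below are assumptions.

```python
import random

random.seed(0)  # assumption: no seed is published on this card

data: list[dict] = []  # placeholder: fill with the full 400-example dataset


def take(examples, batch_name):
    """All examples belonging to one annotation batch."""
    return [ex for ex in examples if ex["batch"] == batch_name]


def split_off(items, n_heldout):
    """Shuffle a batch and split off `n_heldout` items for evaluation."""
    items = list(items)
    random.shuffle(items)
    return items[n_heldout:], items[:n_heldout]


train_p3, test = split_off(take(data, "part 3"), 150)      # 90 train / 150 test
train_ev, valid = split_off(take(data, "evaluation"), 40)  # 40 train / 40 valid
train = take(data, "exploration") + train_ev + train_p3    # 80 + 40 + 90 = 210
```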

  ## Dataset Creation

+ ### Significance and Advantages of Utilization

+ The Relationship Advice dataset was created as a testing ground for machines to learn to respond with greater sensitivity to users' emotional needs. To do so, a machine must be able to identify the type of response it is providing and, when multiple options are available, determine which one would be most appropriate and beneficial for the writer. Reddit provided the foundation for this dataset: conversations on the platform use everyday language, and the topics involve a wide range of emotions, requiring a deep understanding of the semantics and meanings conveyed in the text. Training machines on this data would help them improve their emotional intelligence and respond accordingly.

  ### Source Data

  #### Initial Data Collection and Normalization

+ The data from both subreddits was gathered using the Reddit API. Posts were filtered to a maximum of 500 characters and at least two comments, with each comment shorter than 500 characters. After the filtering process, 400 posts (along with their comments) were randomly sampled from the two subreddits.
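
The card does not say which API client was used; below is a sketch of the stated filtering using PRAW under placeholder credentials, so the query details are assumptions rather than the authors' exact script.

```python
# Sketch of the stated filters with PRAW (placeholder credentials);
# the authors' exact client and queries are not documented on this card.
import random

import praw

reddit = praw.Reddit(
    client_id="YOUR_ID",
    client_secret="YOUR_SECRET",
    user_agent="relationship-advice-crawler",
)

candidates = []
for name in ("dating_advice", "relationship_advice"):
    for submission in reddit.subreddit(name).new(limit=1000):
        if len(submission.selftext) > 500:  # posts capped at 500 characters
            continue
        submission.comments.replace_more(limit=0)  # keep loaded top-level comments
        comments = [c.body for c in submission.comments if len(c.body) < 500]
        if len(comments) >= 2:  # at least two sufficiently short comments
            candidates.append({"post": submission.selftext, "comments": comments[:2]})

posts = random.sample(candidates, 400)  # the card's final random sample
```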
 
 
  #### Who are the source language producers?

+ The language producers are users of the [r/dating_advice](https://www.reddit.com/r/dating_advice/) and [r/relationship_advice](https://www.reddit.com/r/relationship_advice/) subreddits between 2022 and 2024. No further demographic information was available from the data source.

  ### Annotations

+ There were two annotation tasks:
+
+ Task 1: Classify each comment into one of the following labels:
+ - Practical Advice
+ - Emotional support
+ - Commentators' opinion
+ - Hurtful
+ - Sarcasm
+ - Not Relevant
+
+ Task 2: Given a post and its two comments, decide which comment is more helpful to the post writer. The labels are:
+ - Comment 1
+ - Comment 2
+
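
To make the two label spaces concrete, they can be pinned down as mappings; the label strings below are copied from this card, while the integer ids are an illustrative choice rather than part of the released data.

```python
# Label strings are taken from this card; the integer ids are illustrative.
COMMENT_LABELS = {
    "Practical Advice": 0,
    "Emotional support": 1,
    "Commentators' opinion": 2,
    "Hurtful": 3,
    "Sarcasm": 4,
    "Not Relevant": 5,
}
HELPFULNESS_LABELS = {"Comment 1": 0, "Comment 2": 1}


def encode_labels(example: dict) -> tuple[int, int]:
    """Map one example's string labels (Task 1) to integer ids."""
    return (
        COMMENT_LABELS[example["comment_1_label"]],
        COMMENT_LABELS[example["comment_2_label"]],
    )
```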

  #### Annotation process

+ Two groups annotated the data: the owners and external annotators.
+ The data was split into 3 batches: exploration (80 items), evaluation (80 items), and part 3 (240 items).
+
+ - Exploration batch: After defining the task, the authors annotated the first 80 samples to identify data patterns and develop annotation guidelines based on their findings.
+ - Evaluation batch: Following the drafting of the guidelines, two of the authors annotated this batch according to the provided annotation guidelines.
+ - Part 3 batch: This batch was assigned to the external annotators. The first 30 records were annotated first, in order to test and improve the clarity of the guidelines. After the necessary improvements, the final version of the guidelines was provided to the annotators, and they completed the labeling process.

  #### Who are the annotators?

+ The owners of the dataset comprise two males and one female, while the external annotators, who contributed an alternative perspective to the annotation process, include one male and three females. All annotators are aged between 22 and 27 and are final-semester students in the Data Science and Decisions faculty at the Technion.

  ### Personal and Sensitive Information

+ The posts and comments do not contain any personal information and are submitted anonymously. No identifiers of the post or comment authors were collected.

  ## Considerations for Using the Data