ZhuofengLi committed
Commit 0e490a5 · verified · 1 Parent(s): 9c6d663

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -71,3 +71,4 @@ Goodreads-History/raw/goodreads_reviews_history_biography.json filter=lfs diff=lfs merge=lfs -text
  Goodreads-Mystery/raw/goodreads_book_genres_initial.json filter=lfs diff=lfs merge=lfs -text
  Goodreads-Mystery/raw/goodreads_books_mystery_thriller_crime.json filter=lfs diff=lfs merge=lfs -text
  Goodreads-Mystery/raw/goodreads_reviews_mystery_thriller_crime.json filter=lfs diff=lfs merge=lfs -text
+ reddit/raw/reddit.csv filter=lfs diff=lfs merge=lfs -text
amazon_baby/baby.md ADDED
@@ -0,0 +1,17 @@
+ # Amazon-Baby Datasets
+
+ ## Dataset Description
+ The Amazon-Baby dataset is a shopping network. It includes information about items, users, and reviews. Nodes represent items and users. The text on an item node is the item's description; the text on a user node is `reviewer`. The item text follows this template: `This product, available from [store] and called [title], has an average rating of [average_rating]. Key features include [features], and according to the description, [description]. The price is [price], and additional details [details].` Edges represent relationships between items and users. The text on an edge is a user's review of an item, following this template: `This review, titled [title], gives the product a rating of [rating]. The reviewer stated, [text]`.
+
+
+ ## Graph Machine Learning Tasks
+
+ ### Link Prediction
+ Link prediction in the Amazon-Baby dataset involves predicting potential connections between users and items. The goal is to predict whether a user will purchase an item.
+
+ ### Node Classification
+ Node classification tasks in the Amazon-Baby dataset include predicting the item's category.
+
+
+ ## Dataset Source
+ https://amazon-reviews-2023.github.io/data_processing/5core.html
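
As a quick orientation for this card, here is a minimal loading sketch. It assumes the unified attribute layout described in the top-level readme; exact label fields vary per dataset, so treat the attribute names as assumptions to verify.

```python
# Minimal sketch: load the processed Amazon-Baby graph and inspect its texts.
# torch_geometric must be installed for pickle to reconstruct the Data object.
import pickle

with open("amazon_baby/processed/baby.pkl", "rb") as f:
    graph = pickle.load(f)

print(graph)  # e.g. Data(edge_index=[2, E], text_nodes=[N], text_edges=[E], ...)

src, dst = graph.edge_index[:, 0].tolist()  # endpoints of the first edge
print(graph.text_nodes[src])  # one endpoint is a user node ("reviewer")
print(graph.text_nodes[dst])  # the other is an item node (filled template)
print(graph.text_edges[0])    # the review text attached to that edge
```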
amazon_baby/emb/baby_bert_base_uncased_512_cls_edge.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8cc4ff268c919f112deb2187d08f8b2b17b4da16fd35df46cf97150294c4443
+ size 1906304872
amazon_baby/emb/baby_bert_base_uncased_512_cls_node.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b79c1a6036da750010561663a984afeea70c2d221aa315798f2b855861851262
+ size 286910824
amazon_baby/processed/baby.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c35e75bbe1cd469e0b5caf8f18e655e762d6dfd0d9ef4b4a74dcd781b716689
+ size 333053386
amazon_baby/raw/process_final_baby.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
amazon_movie/emb/movie_bert_base_uncased_512_cls_edge.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12edecb61be891ca91169c4f50f889cf328ae8d2cc494954e3de4a6d9d3eb79d
+ size 2607412141
amazon_movie/emb/movie_bert_base_uncased_512_cls_node.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6077a32623f6da96086934d5cca772a4961c51921d400709c290de2e52c35ca
+ size 267283885
amazon_movie/movie.md ADDED
@@ -0,0 +1,16 @@
+ # Amazon-Movie Datasets
+
+ ## Dataset Description
+ The Amazon-Movie dataset is a shopping network. It includes information about movie and TV items, users, and reviews. Nodes represent items and users. The text on an item node is the item's description; the text on a user node is `reviewer`. The item text follows this template: `The product titled '[title]'. It features [feature] and is about [description], making it an excellent choice for [fit]. This product is priced at [price] and comes from the brand [brand]. It ranks [rank] and was released on [date].` Edges represent relationships between items and users. The text on an edge is a user's review of an item, following this template: `Reviewer [reviewerName] left a review on [reviewTime], giving the product [rating] stars. In his/her review, he/she wrote: [reviewText]. His/Her summary was [summary].`
+
+
+ ## Graph Machine Learning Tasks
+
+ ### Link Prediction
+ Link prediction in the Amazon-Movie dataset involves predicting potential connections between users and items. The goal is to predict whether a user will purchase an item.
+
+ ### Node Classification
+ Node classification tasks in the Amazon-Movie dataset include predicting the item's category.
+
+ ## Dataset Source
+ https://cseweb.ucsd.edu/~jmcauley/datasets/amazon_v2/
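
The link-prediction task above can be prototyped without any training by scoring candidate user-item pairs with the precomputed BERT embeddings shipped under `emb/`. A hedged sketch: it assumes the `.pt` file holds a `[num_nodes, hidden]` float tensor whose row `i` embeds `text_nodes[i]`; that alignment is not documented in this repo, so verify it first.

```python
# No-training link-prediction baseline: dot-product scoring of node embeddings.
import pickle
import torch

with open("amazon_movie/processed/movie.pkl", "rb") as f:
    graph = pickle.load(f)
node_emb = torch.load("amazon_movie/emb/movie_bert_base_uncased_512_cls_node.pt",
                      map_location="cpu")

pos = graph.edge_index                               # observed user-item edges
neg = torch.randint(0, node_emb.size(0), pos.shape)  # naive negatives; these ignore
                                                     # the user/item bipartite split

def score(pairs):
    # one dot-product score per candidate edge
    return (node_emb[pairs[0]] * node_emb[pairs[1]]).sum(dim=-1)

print(score(pos).mean().item(), score(neg).mean().item())  # positives should score higher
```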
amazon_movie/processed/movie.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:871ecd8bb0352ac55b9ba4d40ec21e3f3b7853a87ee4183684589966e9cac6c9
+ size 1945870917
amazon_movie/raw/process_final_movie.ipynb ADDED
@@ -0,0 +1,316 @@
+ {
+  "cells": [
+   {
+    "cell_type": "code",
+    "execution_count": 17,
+    "id": "2b881572b62f8ce1",
+    "metadata": {
+     "ExecuteTime": {
+      "end_time": "2024-10-09T13:27:12.237540800Z",
+      "start_time": "2024-10-09T13:26:27.495918500Z"
+     },
+     "collapsed": false
+    },
+    "outputs": [],
+    "source": [
+     "import json\n",
+     "\n",
+     "path = \"Movies_and_TV_5.json\"\n",
+     "dict_edge = {}  # \"reviewerID|asin\" -> filled review template\n",
+     "dict_num_to_id = {}  # maps raw reviewer/item ids to consecutive node ids\n",
+     "edge_score = []\n",
+     "count = 0\n",
+     "review_text = \"Reviewer [reviewerName] left a review on [reviewTime], giving the product [rating] stars. In his/her review, he/she wrote: [reviewText]. His/Her summary was [summary].\"\n",
+     "with open(path) as f:\n",
+     "    for line in f:\n",
+     "        d = json.loads(line)\n",
+     "        edge = d[\"reviewerID\"] + \"|\" + d[\"asin\"]\n",
+     "        try:\n",
+     "            reviewtext = review_text.replace(\"[reviewerName]\", d[\"reviewerName\"])\n",
+     "        except KeyError:  # some reviews lack a reviewer name\n",
+     "            reviewtext = review_text.replace(\"[reviewerName]\", \"\")\n",
+     "        if d[\"reviewTime\"] == \"\":\n",
+     "            reviewtext = reviewtext.replace(\"[reviewTime]\", \"Unknown reviewtime\")\n",
+     "        else:\n",
+     "            reviewtext = reviewtext.replace(\"[reviewTime]\", d[\"reviewTime\"])\n",
+     "        if d[\"overall\"] == \"\":\n",
+     "            reviewtext = reviewtext.replace(\"[rating]\", \"Unknown\")\n",
+     "        else:\n",
+     "            reviewtext = reviewtext.replace(\"[rating]\", str(d[\"overall\"]))\n",
+     "        reviewtext = reviewtext.replace(\"[reviewText]\", d[\"reviewText\"])\n",
+     "        if d[\"summary\"] == \"\":\n",
+     "            reviewtext = reviewtext.replace(\"[summary]\", \"Unknown\")\n",
+     "        else:\n",
+     "            reviewtext = reviewtext.replace(\"[summary]\", d[\"summary\"])\n",
+     "        dict_edge[edge] = reviewtext\n",
+     "        edge_score.append(d[\"overall\"])\n",
+     "        if d[\"reviewerID\"] not in dict_num_to_id:  # user node\n",
+     "            dict_num_to_id[d[\"reviewerID\"]] = count\n",
+     "            count += 1\n",
+     "        if d[\"asin\"] not in dict_num_to_id:  # item node\n",
+     "            dict_num_to_id[d[\"asin\"]] = count\n",
+     "            count += 1\n"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 27,
+    "id": "acb9e595af870544",
+    "metadata": {
+     "ExecuteTime": {
+      "start_time": "2024-10-09T13:27:12.279999300Z"
+     },
+     "collapsed": false,
+     "is_executing": true
+    },
+    "outputs": [],
+    "source": [
+     "import json\n",
+     "\n",
+     "dict_id_to_text = {}\n",
+     "dictid_to_label = {}\n",
+     "nodes_texts = \"The product titled '[title]'. It features [feature] and is about [description], making it an excellent choice for [fit]. This product is priced at [price] and comes from the brand [brand]. It ranks [rank] and was released on [date].\"\n",
+     "# Alternative, richer template (includes [category] and the image URLs):\n",
+     "# nodes_texts = \"The product titled '[title]' falls under the Movies & TV category. It features [feature] and is about [description], making it a great fit for [fit]. This product is sold for [price] and is from the brand [brand]. This product has a rank of [rank] and was released on [date]. For more details, check out the [imageURL] or the high-resolution image [imageURLHighRes].\"\n",
+     "with open(\"meta_Movies_and_TV.json\") as f:\n",
+     "    for line in f:\n",
+     "        d = json.loads(line)\n",
+     "        label_list = []\n",
+     "        for x in d[\"category\"]:\n",
+     "            label_list.append(x)\n",
+     "        dictid_to_label[d[\"asin\"]] = label_list\n",
+     "        product_text = nodes_texts.replace(\"[title]\", d[\"title\"])\n",
+     "        category_text = \", \".join(label_list[1:])  # no-op for the default template\n",
+     "        product_text = product_text.replace(\"[category]\", category_text)\n",
+     "        if d[\"feature\"] == []:\n",
+     "            product_text = product_text.replace(\"[feature]\", \"Unknown feature\")\n",
+     "        else:\n",
+     "            feature_text = \", \".join(d[\"feature\"])\n",
+     "            product_text = product_text.replace(\"[feature]\", feature_text)\n",
+     "        if d[\"description\"] == []:\n",
+     "            product_text = product_text.replace(\"[description]\", \"Unknown description\")\n",
+     "        else:\n",
+     "            description_text = \", \".join(d[\"description\"])\n",
+     "            product_text = product_text.replace(\"[description]\", description_text)\n",
+     "        if d[\"fit\"] == \"\":\n",
+     "            product_text = product_text.replace(\"[fit]\", \"Unknown fit\")\n",
+     "        else:\n",
+     "            product_text = product_text.replace(\"[fit]\", d[\"fit\"])\n",
+     "        if d[\"price\"] == \"\" or d[\"price\"][0] != \"$\":  # some prices are scraped CSS/JS junk\n",
+     "            product_text = product_text.replace(\"[price]\", \"Unknown price\")\n",
+     "        else:\n",
+     "            product_text = product_text.replace(\"[price]\", d[\"price\"])\n",
+     "        if d[\"brand\"] == \"\":\n",
+     "            product_text = product_text.replace(\"[brand]\", \"Unknown brand\")\n",
+     "        else:\n",
+     "            product_text = product_text.replace(\"[brand]\", d[\"brand\"])\n",
+     "        if d[\"rank\"] == \"\":\n",
+     "            product_text = product_text.replace(\"[rank]\", \"Unknown rank\")\n",
+     "        else:\n",
+     "            try:\n",
+     "                product_text = product_text.replace(\"[rank]\", d[\"rank\"])\n",
+     "                product_text = product_text.replace(\"in Movies & TV (\", \"\")\n",
+     "            except TypeError:  # rank is occasionally a list, not a string\n",
+     "                product_text = product_text.replace(\"[rank]\", \"Unknown rank\")\n",
+     "        if d[\"date\"] == \"\":\n",
+     "            product_text = product_text.replace(\"[date]\", \"Unknown date\")\n",
+     "        else:\n",
+     "            product_text = product_text.replace(\"[date]\", d[\"date\"])\n",
+     "        if d[\"imageURL\"] == []:\n",
+     "            product_text = product_text.replace(\"[imageURL]\", \"Unknown imageURL\")\n",
+     "        else:\n",
+     "            imageURL_text = \", \".join(d[\"imageURL\"])\n",
+     "            product_text = product_text.replace(\"[imageURL]\", imageURL_text)\n",
+     "        if d[\"imageURLHighRes\"] == []:\n",
+     "            product_text = product_text.replace(\"[imageURLHighRes]\", \"Unknown imageURLHighRes\")\n",
+     "        else:\n",
+     "            imageURLHighRes_text = \", \".join(d[\"imageURLHighRes\"])\n",
+     "            product_text = product_text.replace(\"[imageURLHighRes]\", imageURLHighRes_text)\n",
+     "        dict_id_to_text[d[\"asin\"]] = product_text"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 28,
+    "id": "5e69e274cb42bf36",
+    "metadata": {
+     "collapsed": false,
+     "is_executing": true
+    },
+    "outputs": [],
+    "source": [
+     "edge1 = []\n",
+     "edge2 = []  # edge1/edge2 together form the COO edge_index\n",
+     "text_nodes = [None] * len(dict_num_to_id)\n",
+     "text_edges = []\n",
+     "text_node_labels = [-1] * len(dict_num_to_id)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "id": "388c334a",
+    "metadata": {},
+    "outputs": [],
+    "source": [
+ "\"The product titled 'An American Christmas Carol VHS'. It features Unknown feature and is about In Depression-era New England, a miserly businessman named Benedict Slade receives a long-overdue attitude adjustment one Christmas eve when he is visited by three ghostly figures who resemble three of the people whose possessions Slade had seized to collect on unpaid loans. Assuming the roles of the Ghosts of Christmas Past, Present, and Future from Charles Dickens' classic story, the three apparitions force Slade to face the consequences of his skinflint ways, and he becomes a caring, generous, amiable man., making it an excellent choice for Unknown fit. This product is priced at .a-box-inner{background-color:#fff}#alohaBuyBoxWidget .selected{background-color:#fffbf3;border-color:#e77600;box-shadow:0 0 3px rgba(228,121,17,.5)}#alohaBuyBoxWidget .contract-not-available{color:gray}#aloha-cart-popover .aloha-cart{height:auto;overflow:hidden}#aloha-cart-popover #aloha-cartInfo{float:left}#aloha-cart-popover #aloha-cart-details{float:right;margin-top:1em}#aloha-cart-popover .deviceContainer{width:160px;float:left;padding-right:10px;border-right:1px solid #ddd}#aloha-cart-popover li:last-child{border-right:0}#aloha-cart-popover .aloha-device-title{height:3em;overflow:hidden}#aloha-cart-popover .aloha-thumbnail-container{height:100px;margin-bottom:1em;text-align:center}#aloha-cart-popover .aloha-price-container{text-align:center}#aloha-cart-popover .aloha-thumbnail-container img{height:inherit}#aloha-cart-popover .aloha-cart{border-top:1px solid #ddd;border-bottom:1px solid #ddd}#aloha-cart-popover #aloha-cart-info{margin-right:0}#alohaBuyBoxWidget .without-contract-subheading{margin-right:0}#aloha-bb-help-nodes .aloha-bb-contract-term-heading{color:gray;font-family:arial;margin-top:.5em;text-align:center;height:.7em;border-bottom:1px solid gray;margin-bottom:1.6em}#aloha-bb-help-nodes .aloha-bb-contract-term-heading span{background-color:#fff;padding:0 10px 0 10px}#alohaAvailabilityUS_feature_div .availability a{text-decoration:none}#alohaPricingWidget a{text-decoration:none}#alohaAvailabilityUS_feature_div .availability{margin-top:-4px;margin-bottom:0}#alohaBuyBoxWidget .select-transaction-alert .a-icon-alert{top:18px;left:3px}#alohaBuyBoxWidget .select-transaction-alert .a-alert-container{padding-left:39px;width:290px}#alohaBuyBoxUS_feature_div #alohaBuyBoxWidget .contract-container .contract-term-heading a{text-decoration:none}#alohaBuyBoxUS_feature_div #alohaBuyBoxWidget .annual-contract-box .a-icon-popover{display:none}#alohaBuyBoxUS_feature_div #alohaBuyBoxWidget .contract-container .annual-contract-box{cursor:pointer;cursor:hand}#alohaBuyBoxUS_feature_div #alohaBuyBoxWidget .aloha-buybox-price{font-size:15px}#alohaBuyBoxUS_feature_div #alohaBuyBoxWidget #linkOffSection a{text-decoration:none}#alohaBuyBoxUS_feature_div .lockedUsedBuyboxContainer{padding-left:3.5%}#alohaBuyBoxUS_feature_div .alohaBuyboxUtilsNoWrap{white-space:nowrap}.hidden{display:none}.simo-no-padding{padding:0}.carrier-reviews-cell{padding-left:10px}.carrier-reviews-bordered-cell{border:1px dotted #ccc}.carrier-reviews-selected-cell{background-color:#ffd}#aloha-carrier-compatibility-modal-table-description{margin-top:10px;margin-bottom:14px}.aloha-carrier-compatibility-sortable-header.carrier{min-width:97px}.aloha-carrier-compatibility-sortable-header.compatibility{min-width:156px}.aloha-carrier-compatibility-sortable-header div{float:left}.aloha-carrier-compatibility-sortable-header 
i.a-icon{margin-left:10px;margin-top:4px}#aloha-carrier-compatibility-overview-table.a-bordered.a-vertical-stripes td:nth-child(2n),#aloha-carrier-compatibility-overview-table.a-bordered.a-vertical-stripes th:nth-child(2n){background-color:initial}#aloha-carrier-compatibility-modal-table.a-bordered.a-vertical-stripes td:nth-child(2n),#aloha-carrier-compatibility-modal-table.a-bordered.a-vertical-stripes th:nth-child(2n){background-color:initial}#aloha-carrier-compatibility-table.a-bordered.a-vertical-stripes th:nth-child(2n),.aloha-carrier-compatibility-table.a-bordered.a-vertical-stripes td:nth-child(2n){background-color:transparent}.aloha-carrier-compatibility-column-gray{background-color:#f6f6f6}.aloha-carrier-compatibility-modal-table-row .aloha-carrier-compatibility-tech-text,.aloha-carrier-compatibility-modal-table-row .carrier-name,.aloha-carrier-compatibility-modal-table-row .carrier-rating-summary{min-height:27px;display:inline-block;cursor:default}.aloha-carrier-compatibility-modal-table-row .aloha-carrier-compatibility-tech-text:first-line,.aloha-carrier-compatibility-modal-table-row .carrier-name:first-line,.aloha-carrier-compatibility-modal-table-row .carrier-rating-summary:first-line{line-height:27px}.aloha-carrier-compatibility-modal-table-row .aloha-carrier-compatibility-icon{margin-top:6px}.aloha-carrier-compatibility-check-icon{width:30px;height:27px;background-position:-318px -35px;background-image:url(https://images-na.ssl-images-amazon.com/images/G/01/AUIClients/AmazonUIBaseCSS-sprite_2x-8e7ef370dc28a214b3f490c9620f4ac501d5a864._V2_.png);background-repeat:no-repeat;background-size:400px 650px;display:inline-block;vertical-align:top}.aloha-carrier-compatibility-hidden{display:none}.aloha-buybox-spaced-link{margin-top:12px;margin-bottom:7px;text-align:center}.popover-tab and comes from the brand Unknown brand. It ranks 704,028 in Movies & TV ( and was released on Unknown date.\""
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 30,
+    "id": "f2adedbc870feda",
+    "metadata": {
+     "collapsed": false,
+     "is_executing": true
+    },
+    "outputs": [],
+    "source": [
+     "for edge, edge_text in dict_edge.items():\n",
+     "    node1, node2 = edge.split(\"|\")\n",
+     "    node1_id = int(dict_num_to_id[node1])\n",
+     "    node2_id = int(dict_num_to_id[node2])\n",
+     "    edge1.append(node1_id)\n",
+     "    edge2.append(node2_id)\n",
+     "    text_nodes[node1_id] = \"reviewer\"\n",
+     "    try:\n",
+     "        text_nodes[node2_id] = dict_id_to_text[node2]\n",
+     "    except KeyError:  # item missing from the metadata file\n",
+     "        text_nodes[node2_id] = \"item\"\n",
+     "    text_edges.append(edge_text)\n",
+     "    try:\n",
+     "        text_node_labels[node2_id] = dictid_to_label[node2]\n",
+     "    except KeyError:\n",
+     "        text_node_labels[node2_id] = \"Unknown\""
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 31,
+    "id": "3305934f1a11caa7",
+    "metadata": {
+     "collapsed": false,
+     "is_executing": true
+    },
+    "outputs": [],
+    "source": [
+     "from torch_geometric.data import Data\n",
+     "import torch"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 32,
+    "id": "5030fa8672f2b177",
+    "metadata": {
+     "collapsed": false,
+     "is_executing": true
+    },
+    "outputs": [],
+    "source": [
+     "edge_index = torch.tensor([edge1, edge2])"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 33,
+    "id": "21085a8a04df7062",
+    "metadata": {
+     "collapsed": false,
+     "is_executing": true
+    },
+    "outputs": [],
+    "source": [
+     "new_data = Data(\n",
+     "    edge_index=edge_index,\n",
+     "    text_nodes=text_nodes,\n",
+     "    text_edges=text_edges,\n",
+     "    text_node_labels=text_node_labels,\n",
+     "    edge_score=edge_score\n",
+     ")"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 35,
+    "id": "d39601d90a0171c5",
+    "metadata": {
+     "collapsed": false,
+     "is_executing": true
+    },
+    "outputs": [
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "Data saved to ./processed/movie.pkl\n"
+      ]
+     }
+    ],
+    "source": [
+     "import pickle\n",
+     "\n",
+     "output_file_path = '../processed/movie.pkl'\n",
+     "with open(output_file_path, 'wb') as output_file:\n",
+     "    pickle.dump(new_data, output_file)\n",
+     "\n",
+     "print(f\"Data saved to {output_file_path}\")"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 37,
+    "id": "60f52e9317cfad61",
+    "metadata": {
+     "collapsed": false,
+     "is_executing": true
+    },
+    "outputs": [
+     {
+      "data": {
+       "text/plain": [
+        "Data(edge_index=[2, 1697533], text_nodes=[174012], text_edges=[1697533], text_node_labels=[174012], edge_score=[1697533])"
+       ]
+      },
+      "execution_count": 37,
+      "metadata": {},
+      "output_type": "execute_result"
+     }
+    ],
+    "source": [
+     "new_data"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "id": "4aaa10c4d649044a",
+    "metadata": {
+     "collapsed": false,
+     "is_executing": true
+    },
+    "outputs": [],
+    "source": []
+   }
+  ],
+  "metadata": {
+   "kernelspec": {
+    "display_name": "Python 3",
+    "language": "python",
+    "name": "python3"
+   },
+   "language_info": {
+    "codemirror_mode": {
+     "name": "ipython",
+     "version": 3
+    },
+    "file_extension": ".py",
+    "mimetype": "text/x-python",
+    "name": "python",
+    "nbconvert_exporter": "python",
+    "pygments_lexer": "ipython3",
+    "version": "3.10.12"
+   }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 5
+ }
goodreads_comics/goodreads.md CHANGED
@@ -10,4 +10,7 @@ The Goodreads datasets consist of four datasets, specifically labeled as Goodrea
  Link prediction in the Goodreads dataset involves predicting potential connections between users and books. The goal is to predict whether a user will review a book.
 
  ### Node Classification
- Node classification tasks in the Goodreads dataset include predicting the book's category.
+ Node classification tasks in the Goodreads dataset include predicting the book's category.
+
+ ## Dataset Source
+ https://mengtingwan.github.io/data/goodreads.html
goodreads_crime/goodreads.md CHANGED
@@ -10,4 +10,7 @@ The Goodreads datasets consist of four datasets, specifically labeled as Goodrea
  Link prediction in the Goodreads dataset involves predicting potential connections between users and books. The goal is to predict whether a user will review a book.
 
  ### Node Classification
- Node classification tasks in the Goodreads dataset include predicting the book's category.
+ Node classification tasks in the Goodreads dataset include predicting the book's category.
+
+ ## Dataset Source
+ https://mengtingwan.github.io/data/goodreads.html
goodreads_history/emb/history_bert_base_uncased_512_cls_edge.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:342c90b06b2b1bf599dbcf55ab7725c53dc76f686bf3a1dc70db8c293cdd9921
+ size 3173673911
goodreads_history/emb/history_bert_base_uncased_512_cls_node.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25fcbfe6382af9c851e0a14bac9c5908f3b0d8cb3f3ad6e59f62ea47089773fe
+ size 830664119
goodreads_history/goodreads.md CHANGED
@@ -10,4 +10,7 @@ The Goodreads datasets consist of four datasets, specifically labeled as Goodrea
  Link prediction in the Goodreads dataset involves predicting potential connections between users and books. The goal is to predict whether a user will review a book.
 
  ### Node Classification
- Node classification tasks in the Goodreads dataset include predicting the book's category.
+ Node classification tasks in the Goodreads dataset include predicting the book's category.
+
+ ## Dataset Source
+ https://mengtingwan.github.io/data/goodreads.html
readme.md CHANGED
@@ -2,14 +2,12 @@
 
  ## Dataset Format
 
  Each dataset is a [PyG Data object](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.data.Dataset.html#torch_geometric.data.Dataset) and is stored in the `processed` subdir following a unified format, with each attribute defined as follows:
 
  - `edge_index`: Graph connectivity in COO format with shape [2, num_edges] and type `torch.long`.
  - `text_nodes`: A `List` containing the textual information for each node in the graph.
  - `text_edges`: A `List` containing the textual information for each edge in the graph.
- - `node_labels`: labels or classes for each node in the graph.
- - `edge_labels`: labels or classes for each edge in the graph and type `torch.long`.
-
+ - `node_labels`: A `List` containing text labels for each node in the graph; `-1` marks nodes without labels.
 
  ## Embedding Data Format
  The embedding data is derived from `text_nodes` and `text_edges` through PLMs, including:
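
A small consistency check makes the unified format concrete. This is a sketch under stated assumptions: the attribute names above, and that embedding row `i` corresponds to `text_nodes[i]` (respectively `text_edges[i]`); per-dataset files may add extra fields such as `text_node_labels` or `edge_score`.

```python
# Sketch: validate one processed graph against the unified format.
import pickle
import torch

with open("reddit/processed/reddit.pkl", "rb") as f:
    graph = pickle.load(f)  # a torch_geometric.data.Data object

assert graph.edge_index.shape[0] == 2           # COO connectivity
assert graph.edge_index.dtype == torch.long
assert len(graph.text_edges) == graph.edge_index.shape[1]  # one text per edge

# emb/ pairs each graph with precomputed text embeddings (assumed row-aligned)
node_emb = torch.load("reddit/emb/reddit_bert_base_uncased_512_cls_node.pt",
                      map_location="cpu")
print(node_emb.shape, len(graph.text_nodes))
```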
reddit/emb/reddit_bert_base_uncased_512_cls_edge.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23dd66473a96aede602b6308393bbb6a3a65dd4c22d56a9e77c1423c4857d816
+ size 393813426
reddit/emb/reddit_bert_base_uncased_512_cls_node.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74410741bf2fc68e799e4f5621fa501bd0dc5588c46e745d1a971ab43ff5a40a
+ size 787625394
reddit/processed/reddit.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:298b871b7db735cae25c835f868fdfb9d976f113eee22e06fb14ce03167383d2
+ size 68520870
reddit/raw/process_final_reddit.ipynb ADDED
@@ -0,0 +1,254 @@
+ {
+  "cells": [
+   {
+    "cell_type": "code",
+    "execution_count": 1,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "import pandas as pd\n",
+     "import pickle as pkl\n",
+     "\n",
+     "from torch_geometric.data import Data\n",
+     "import torch\n",
+     "import tqdm"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 11,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "(1070077, 22)\n",
+       "(256388, 8)\n"
+      ]
+     }
+    ],
+    "source": [
+     "## Preprocessing\n",
+     "df = pd.read_csv(\"reddit.csv\", low_memory=False)  # avoid mixed-dtype warnings\n",
+     "print(df.shape)\n",
+     "\n",
+     "# select columns\n",
+     "df_graph = df[\n",
+     "    [\n",
+     "        \"subreddit_id\",\n",
+     "        \"subreddit\",\n",
+     "        \"name\",\n",
+     "        \"body\",\n",
+     "        \"score\",\n",
+     "        \"author\",\n",
+     "        \"author_flair_text\",\n",
+     "        \"distinguished\",\n",
+     "    ]\n",
+     "].copy()  # copy so the rename below does not modify a view of df\n",
+     "df_graph.rename(\n",
+     "    columns={\n",
+     "        \"name\": \"post_id\",\n",
+     "        \"body\": \"post\",\n",
+     "        \"author\": \"user\",\n",
+     "        \"author_flair_text\": \"user_flair\",\n",
+     "    },\n",
+     "    inplace=True,\n",
+     "    errors=\"raise\",\n",
+     ")\n",
+     "\n",
+     "# drop duplicates, deleted posts, and rows with missing fields\n",
+     "df_graph = df_graph.drop_duplicates()\n",
+     "df_graph = df_graph[df_graph[\"post\"] != \"[deleted]\"]\n",
+     "df_graph = df_graph.dropna(subset=[\"post_id\", \"user_flair\", \"subreddit\", \"post\"])\n",
+     "print(df_graph.shape)\n",
+     "\n",
+     "df_graph[\"distinguished\"] = df_graph[\"distinguished\"].apply(\n",
+     "    lambda x: \"ordinary\" if pd.isna(x) else \"distinguished\"\n",
+     ")\n",
+     "df_graph[\"user_flair\"] = df_graph[\"user_flair\"].apply(lambda x: \"\" if pd.isna(x) else x)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 12,
+    "metadata": {},
+    "outputs": [
+     {
+      "data": {
+       "text/html": [
+        "<div>\n",
+        "<style scoped>\n",
+        "    .dataframe tbody tr th:only-of-type {\n",
+        "        vertical-align: middle;\n",
+        "    }\n",
+        "\n",
+        "    .dataframe tbody tr th {\n",
+        "        vertical-align: top;\n",
+        "    }\n",
+        "\n",
+        "    .dataframe thead th {\n",
+        "        text-align: right;\n",
+        "    }\n",
+        "</style>\n",
+        "<table border=\"1\" class=\"dataframe\">\n",
+        "  <thead>\n",
+        "    <tr style=\"text-align: right;\">\n",
+        "      <th></th>\n",
+        "      <th>subreddit_id</th>\n",
+        "      <th>subreddit</th>\n",
+        "      <th>post_id</th>\n",
+        "      <th>post</th>\n",
+        "      <th>score</th>\n",
+        "      <th>user</th>\n",
+        "      <th>user_flair</th>\n",
+        "      <th>distinguished</th>\n",
+        "    </tr>\n",
+        "  </thead>\n",
+        "  <tbody>\n",
+        "    <tr>\n",
+        "      <th>3</th>\n",
+        "      <td>t5_2qhon</td>\n",
+        "      <td>comicbooks</td>\n",
+        "      <td>t1_cqug9dk</td>\n",
+        "      <td>It's not contradictory. Snyder's rendition of ...</td>\n",
+        "      <td>1.0</td>\n",
+        "      <td>eskimo_bros</td>\n",
+        "      <td>Luke Cage</td>\n",
+        "      <td>ordinary</td>\n",
+        "    </tr>\n",
+        "  </tbody>\n",
+        "</table>\n",
+        "</div>"
+       ],
+       "text/plain": [
+        "  subreddit_id   subreddit     post_id  \\\n",
+        "3     t5_2qhon  comicbooks  t1_cqug9dk   \n",
+        "\n",
+        "                                                post  score         user  \\\n",
+        "3  It's not contradictory. Snyder's rendition of ...    1.0  eskimo_bros   \n",
+        "\n",
+        "  user_flair distinguished  \n",
+        "3  Luke Cage      ordinary  "
+       ]
+      },
+      "execution_count": 12,
+      "metadata": {},
+      "output_type": "execute_result"
+     }
+    ],
+    "source": [
+     "df = df_graph\n",
+     "df.head(1)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 14,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "256388it [00:15, 17023.98it/s]\n"
+      ]
+     }
+    ],
+    "source": [
+     "text_nodes = []\n",
+     "node_labels = []\n",
+     "sub_id2idx = {}\n",
+     "sub_nodes = []\n",
+     "user_id2idx = {}\n",
+     "user_nodes = []\n",
+     "count = 0\n",
+     "text_edges = []\n",
+     "for _, row in tqdm.tqdm(df.iterrows()):\n",
+     "    sub_id = str(row[\"subreddit\"])\n",
+     "    user_id = str(row[\"user\"])\n",
+     "\n",
+     "    if sub_id not in sub_id2idx:\n",
+     "        sub_id2idx[sub_id] = count\n",
+     "        sub_nodes.append(count)\n",
+     "        count += 1\n",
+     "        # record the node text exactly once, so text_nodes[i] matches node id i\n",
+     "        text_nodes.append(f\"subreddit {sub_id}\")\n",
+     "        node_labels.append(-1)\n",
+     "    else:\n",
+     "        sub_nodes.append(sub_id2idx[sub_id])\n",
+     "\n",
+     "    if user_id not in user_id2idx:\n",
+     "        user_id2idx[user_id] = count\n",
+     "        user_nodes.append(count)\n",
+     "        count += 1\n",
+     "        text_nodes.append(f\"user {user_id} has flair {row['user_flair']}\")\n",
+     "        node_labels.append(row[\"distinguished\"])\n",
+     "    else:\n",
+     "        user_nodes.append(user_id2idx[user_id])\n",
+     "    text_edges.append(str(row[\"post\"]))\n",
+     "\n",
+     "## Save it as torch data\n",
+     "graph = Data(\n",
+     "    text_nodes=text_nodes,\n",
+     "    text_edges=text_edges,\n",
+     "    node_labels=node_labels,\n",
+     "    edge_index=torch.tensor([user_nodes, sub_nodes], dtype=torch.long),\n",
+     ")\n",
+     "\n",
+     "with open(\"../processed/reddit.pkl\", \"wb\") as file:\n",
+     "    pkl.dump(graph, file)"
+    ]
+   }
+  ],
+  "metadata": {
+   "kernelspec": {
+    "display_name": "Python 3",
+    "language": "python",
+    "name": "python3"
+   },
+   "language_info": {
+    "codemirror_mode": {
+     "name": "ipython",
+     "version": 3
+    },
+    "file_extension": ".py",
+    "mimetype": "text/x-python",
+    "name": "python",
+    "nbconvert_exporter": "python",
+    "pygments_lexer": "ipython3",
+    "version": "3.10.12"
+   }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 2
+ }
reddit/raw/reddit.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15d2f9b256702a573a9b185d214f60034a7a996a8b9a8d179eaf430650915efa
+ size 296939093
reddit/reddit.md ADDED
@@ -0,0 +1,16 @@
+ # Reddit Datasets
+
+ ## Dataset Description
+ The Reddit dataset is a social network. It includes information about subreddits and users. Nodes represent subreddits and users. The text on a subreddit node is the subreddit's name. The text on a user node follows this template: `user [user] has flair [user_flair]`. Edges represent relationships between subreddits and users. The text on an edge is the user's post in that subreddit.
+
+
+ ## Graph Machine Learning Tasks
+
+ ### Link Prediction
+ Link prediction in the Reddit dataset involves predicting potential connections between users and subreddits. The goal is to predict whether a user will post in a subreddit.
+
+ ### Node Classification
+ Node classification tasks in the Reddit dataset include predicting the user's category.
+
+ ## Dataset Source
+ https://www.reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/
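
The node-classification task can be prototyped directly from the precomputed node embeddings. A sketch under stated assumptions: embedding rows align with `node_labels`, user nodes carry string labels while unlabeled subreddit nodes hold the `-1` placeholder, and scikit-learn is available (it is not a stated dependency of this repo).

```python
# Sketch: classify user nodes (ordinary vs. distinguished) from BERT embeddings.
import pickle
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

with open("reddit/processed/reddit.pkl", "rb") as f:
    graph = pickle.load(f)
emb = torch.load("reddit/emb/reddit_bert_base_uncased_512_cls_node.pt",
                 map_location="cpu").numpy()

labeled = [i for i, y in enumerate(graph.node_labels) if y != -1]  # user nodes only
X = emb[labeled]
y = [graph.node_labels[i] for i in labeled]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```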
twitter/emb/tweets_bert_base_uncased_512_cls_edge.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:050b7c8b64cd1fa509b1c31f1ae1b13f13a298e3726306a0f2bd808cdb0fe14a
+ size 114688434
twitter/emb/tweets_bert_base_uncased_512_cls_node.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b874ce830fbf92010bb2bf6617a0519d79ac407798f9bcbd28c81e3e003e167
+ size 93367154
twitter/raw/68841_tweets_multiclasses_filtered_0722_part1.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdc595c36f74073feeb9dea9af01a467dd64743ceec15442085d8c3f2f187339
+ size 20623408
twitter/raw/68841_tweets_multiclasses_filtered_0722_part2.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c41a4accb55fc90e9399941a95845cf17e8a072e2936ce5c2cb495e79713bea
+ size 21953352
twitter/raw/process_final_twitter.ipynb ADDED
@@ -0,0 +1,152 @@
+ {
+  "cells": [
+   {
+    "cell_type": "code",
+    "execution_count": 13,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "import numpy as np\n",
+     "import pandas as pd\n",
+     "import torch\n",
+     "from torch_geometric.data import Data\n",
+     "import tqdm\n",
+     "import pickle"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 2,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "# load both halves of the filtered tweet dump and stack them\n",
+     "p1 = np.load(\"68841_tweets_multiclasses_filtered_0722_part1.npy\", allow_pickle=True)\n",
+     "p2 = np.load(\"68841_tweets_multiclasses_filtered_0722_part2.npy\", allow_pickle=True)\n",
+     "g = np.concatenate((p1, p2), axis=0)\n",
+     "df = pd.DataFrame(data=g, columns=[\"event_id\", \"tweet_id\", \"text\", \"user_id\", \"created_at\", \"user_loc\",\n",
+     "                                   \"place_type\", \"place_full_name\", \"place_country_code\", \"hashtags\",\n",
+     "                                   \"user_mentions\", \"image_urls\", \"entities\",\n",
+     "                                   \"words\", \"filtered_words\", \"sampled_words\"])"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 5,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "67682it [00:04, 15665.18it/s]\n"
+      ]
+     }
+    ],
+    "source": [
+     "tweet_id2idx = {}\n",
+     "user_id2idx = {}\n",
+     "user = []\n",
+     "tweet = []\n",
+     "text_edges = []\n",
+     "text_nodes = [-1] * (len(df) * 20)  # oversized placeholder list, truncated to count later\n",
+     "count = 0\n",
+     "\n",
+     "# Use df instead of g for iteration\n",
+     "for _, row in tqdm.tqdm(df.iterrows()):\n",
+     "    # Convert tweet_id and user_id to string to ensure consistency\n",
+     "    tweet_id = str(row['tweet_id'])\n",
+     "    user_id = str(row['user_id'])\n",
+     "\n",
+     "    if tweet_id not in tweet_id2idx:\n",
+     "        tweet_id2idx[tweet_id] = count\n",
+     "        tweet.append(count)\n",
+     "        count += 1\n",
+     "    else:\n",
+     "        tweet.append(tweet_id2idx[tweet_id])\n",
+     "    text_nodes[tweet_id2idx[tweet_id]] = f\"tweet{tweet_id2idx[tweet_id]} of event{row['event_id']}\"\n",
+     "\n",
+     "    if user_id not in user_id2idx:\n",
+     "        user_id2idx[user_id] = count\n",
+     "        user.append(count)\n",
+     "        count += 1\n",
+     "    else:\n",
+     "        user.append(user_id2idx[user_id])\n",
+     "    text_nodes[user_id2idx[user_id]] = \"user\"\n",
+     "\n",
+     "    text_edges.append(row['text'])\n",
+     "\n",
+     "    # mentioned users become nodes linked to the same tweet\n",
+     "    for mention in row['user_mentions']:\n",
+     "        if mention not in user_id2idx:\n",
+     "            user_id2idx[mention] = count\n",
+     "            user.append(count)\n",
+     "            count += 1\n",
+     "        else:\n",
+     "            user.append(user_id2idx[mention])\n",
+     "        tweet.append(tweet_id2idx[tweet_id])\n",
+     "        text_nodes[user_id2idx[mention]] = \"mentioned user\"\n",
+     "        text_edges.append(row['text'])"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 8,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "text_nodes = text_nodes[:count]  # keep only the slots actually assigned"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 10,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "edge_index = [user, tweet]\n",
+     "graph = Data(\n",
+     "    text_nodes=text_nodes,\n",
+     "    text_edges=text_edges,\n",
+     "    edge_index=torch.tensor(edge_index, dtype=torch.long)\n",
+     ")"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 21,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "with open('../processed/twitter.pkl', 'wb') as f:\n",
+     "    pickle.dump(graph, f)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": []
+   }
+  ],
+  "metadata": {
+   "kernelspec": {
+    "display_name": "Python 3",
+    "language": "python",
+    "name": "python3"
+   },
+   "language_info": {
+    "codemirror_mode": {
+     "name": "ipython",
+     "version": 3
+    },
+    "file_extension": ".py",
+    "mimetype": "text/x-python",
+    "name": "python",
+    "nbconvert_exporter": "python",
+    "pygments_lexer": "ipython3",
+    "version": "3.10.12"
+   }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 2
+ }
twitter/twitter.md CHANGED
@@ -1,10 +1,14 @@
  # Twitter Datasets
 
  ## Dataset Description
- The twitter dataset is a social network. Nodes represent tweets and users. Text on nodes is the description of the tweets or users. The Edge between a use and a tweet means that the user posts the tweet. Text on edges is contents of the tweets.
+ The Twitter dataset is a social network. Nodes represent tweets and users. Text on a node is the description of the tweet or the user. An edge between a user and a tweet means that the user posted the tweet. Text on an edge is the content of the tweet.
 
 
  ## Graph Machine Learning Tasks
 
  ### Link Prediction
  Link prediction in the Twitter dataset involves predicting potential connections between tweets and users. The goal is to predict whether a user will post a tweet.
+
+
+ ## Dataset Source
+ https://dl.acm.org/doi/10.1145/2505515.2505695
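
For the link-prediction task, a train/test edge split with sampled negatives can be assembled from PyG utilities. A sketch, assuming `twitter/processed/twitter.pkl` follows the unified format written by `raw/process_final_twitter.ipynb` (rows of `edge_index` are `[user, tweet]`):

```python
# Sketch: hold out 10% of user-tweet edges and sample matching negatives.
import pickle
import torch
from torch_geometric.utils import negative_sampling

with open("twitter/processed/twitter.pkl", "rb") as f:
    graph = pickle.load(f)

num_edges = graph.edge_index.size(1)
perm = torch.randperm(num_edges)
split = int(0.9 * num_edges)
train_pos = graph.edge_index[:, perm[:split]]
test_pos = graph.edge_index[:, perm[split:]]

# Naive negatives: corrupted pairs drawn over all nodes; a stricter protocol
# would respect the user/tweet bipartite structure.
test_neg = negative_sampling(graph.edge_index, num_nodes=len(graph.text_nodes),
                             num_neg_samples=test_pos.size(1))
print(train_pos.shape, test_pos.shape, test_neg.shape)
```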