Philipp Hager committed on
Commit 7c76967
2 Parent(s): 75b2de7 7c59a35

Merge branch 'main' of hf.co:datasets/philipphager/baidu-ultr_baidu-mlm-ctr

Files changed (2)
  1. README.md +104 -0
  2. baidu-ultr_baidu-mlm-ctr.py +221 -0
README.md CHANGED
@@ -1,3 +1,107 @@
---
license: cc-by-nc-4.0
viewer: false
---
# Baidu ULTR Dataset - Baidu BERT-12l-12h
Query-document vectors and clicks for a subset of the [Baidu Unbiased Learning to Rank dataset](https://arxiv.org/abs/2207.03051).
This dataset uses the 12-layer BERT cross-encoder released by Baidu in the [official starter-kit](https://github.com/ChuXiaokai/baidu_ultr_dataset/) to compute query-document vectors (768 dims).

## Setup
1. Install huggingface [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. Optionally, you may need to install a [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) if you cannot install `pyarrow >= 14.0.1`
4. You can now use the dataset as described below.

## Load train / test click dataset:
```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
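
A quick sanity check after loading (a minimal sketch; the printed values depend on the split you chose):

```Python
print(len(dataset))       # number of queries in the split
print(dataset.features)   # column schema, see the feature tables below
```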

## Load expert annotations:
```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```

## Available features
Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):

### Click dataset
| name                       | dtype           | description |
|----------------------------|-----------------|-------------|
| query_id                   | string          | Baidu query_id |
| query_md5                  | string          | MD5 hash of query text |
| url_md5                    | List[string]    | MD5 hash of document URL, the most reliable document identifier |
| text_md5                   | List[string]    | MD5 hash of document title and abstract |
| query_document_embedding   | Tensor[float16] | BERT CLS token |
| click                      | Tensor[int32]   | Click / no click on a document |
| n                          | int32           | Number of documents for the current query, useful for padding |
| position                   | Tensor[int32]   | Position in ranking (does not always match the original item position) |
| media_type                 | Tensor[int32]   | Document type (label encoding recommended as ids do not occupy a continuous integer range) |
| displayed_time             | Tensor[float32] | Seconds a document was displayed on screen |
| serp_height                | Tensor[int32]   | Pixel height of a document on screen |
| slipoff_count_after_click  | Tensor[int32]   | Number of times a document was scrolled off screen after previously clicking on it |

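Since every example is one query, the per-document features above are lists / tensors of length `n`. A minimal sketch (assuming the click split was loaded with `set_format("torch")` as shown above) to inspect one sample:

```Python
sample = dataset[0]

print(sample["query_id"])                        # one query per example
print(sample["n"])                               # number of documents for this query
print(sample["query_document_embedding"].shape)  # (n, 768) BERT CLS vectors
print(sample["click"].shape)                     # (n,) click / no click per document
```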

### Expert annotation dataset
| name                       | dtype           | description |
|----------------------------|-----------------|-------------|
| query_id                   | string          | Baidu query_id |
| query_md5                  | string          | MD5 hash of query text |
| text_md5                   | List[string]    | MD5 hash of document title and abstract |
| query_document_embedding   | Tensor[float16] | BERT CLS token |
| label                      | Tensor[int32]   | Relevance judgment on a scale from 0 (bad) to 4 (excellent) |
| n                          | int32           | Number of documents for the current query, useful for padding |
| frequency_bucket           | int32           | Monthly frequency of the query (bucket) from 0 (high frequency) to 9 (low frequency) |

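The graded labels plug directly into standard ranking metrics. Below is a minimal sketch (the `scores` tensor stands in for a hypothetical model output; it is not part of the dataset) computing DCG@10 for a single annotated query loaded with the `annotations` config above:

```Python
import torch

def dcg_at_k(labels: torch.Tensor, scores: torch.Tensor, k: int = 10) -> torch.Tensor:
    # Rank documents by descending model score and keep the top-k labels.
    ranked_labels = labels[torch.argsort(scores, descending=True)][:k]
    discounts = 1.0 / torch.log2(torch.arange(2, ranked_labels.numel() + 2).float())
    return ((2 ** ranked_labels - 1) * discounts).sum()

sample = dataset[0]                                  # one annotated query
scores = torch.randn_like(sample["label"].float())   # hypothetical model scores, one per document
print(dcg_at_k(sample["label"].float(), scores))
```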

## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:

```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List):
    batch = defaultdict(lambda: [])

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }

loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
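
A quick way to verify the padding (assuming the click split was loaded with `set_format("torch")` as above):

```Python
batch = next(iter(loader))

print(batch["query_document_embedding"].shape)  # (batch_size, max_docs_in_batch, 768)
print(batch["click"].shape)                     # (batch_size, max_docs_in_batch)
print(batch["n"])                               # true number of documents per query
```
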
baidu-ultr_baidu-mlm-ctr.py ADDED
@@ -0,0 +1,221 @@
from enum import Enum
from typing import List

import datasets
import pandas as pd

from datasets import Features, Value, Array2D, Sequence, SplitGenerator, Split


_CITATION = """\
@InProceedings{huggingface:dataset,
  title = {philipphager/baidu-ultr_baidu-mlm-ctr},
  author = {Philipp Hager, Romain Deffayet},
  year = {2023}
}
"""

_DESCRIPTION = """\
Query-document vectors and clicks for a subset of the Baidu Unbiased Learning to Rank
dataset: https://arxiv.org/abs/2207.03051

This dataset uses the BERT cross-encoder with 12 layers from Baidu released
in the official starter-kit to compute query-document vectors (768 dims):
https://github.com/ChuXiaokai/baidu_ultr_dataset/

We also link the model checkpoint under `model/`.
"""

_HOMEPAGE = "https://huggingface.co/datasets/philipphager/baidu-ultr_baidu-mlm-ctr/"
_LICENSE = "cc-by-nc-4.0"
_VERSION = "0.1.0"


class Config(str, Enum):
    ANNOTATIONS = "annotations"
    CLICKS = "clicks"


class BaiduUltrBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version(_VERSION)
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name=Config.CLICKS,
            version=VERSION,
            description="Load train/val/test clicks from the Baidu ULTR dataset",
        ),
        datasets.BuilderConfig(
            name=Config.ANNOTATIONS,
            version=VERSION,
            description="Load expert annotations from the Baidu ULTR dataset",
        ),
    ]

    CLICK_FEATURES = Features(
        {
            "query_id": Value("string"),
            "query_md5": Value("string"),
            "url_md5": Sequence(Value("string")),
            "text_md5": Sequence(Value("string")),
            "query_document_embedding": Array2D((None, 768), "float16"),
            "click": Sequence(Value("int32")),
            "n": Value("int32"),
            "position": Sequence(Value("int32")),
            "media_type": Sequence(Value("int32")),
            "displayed_time": Sequence(Value("float32")),
            "serp_height": Sequence(Value("int32")),
            "slipoff_count_after_click": Sequence(Value("int32")),
        }
    )

    ANNOTATION_FEATURES = Features(
        {
            "query_id": Value("string"),
            "query_md5": Value("string"),
            "text_md5": Sequence(Value("string")),
            "query_document_embedding": Array2D((None, 768), "float16"),
            "label": Sequence(Value("int32")),
            "n": Value("int32"),
            "frequency_bucket": Value("int32"),
        }
    )

    DEFAULT_CONFIG_NAME = Config.CLICKS

    def _info(self):
        if self.config.name == Config.CLICKS:
            features = self.CLICK_FEATURES
        elif self.config.name == Config.ANNOTATIONS:
            features = self.ANNOTATION_FEATURES
        else:
            raise ValueError(
                f"Config {self.config.name} must be in ['clicks', 'annotations']"
            )

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        if self.config.name == Config.CLICKS:
            # Click partitions 1-3 are used for training, partition 0 for testing.
            train_files = self.download_clicks(dl_manager, parts=[1, 2, 3])
            test_files = self.download_clicks(dl_manager, parts=[0])

            query_columns = [
                "query_id",
                "query_md5",
            ]

            agg_columns = [
                "url_md5",
                "text_md5",
                "position",
                "click",
                "query_document_embedding",
                "media_type",
                "displayed_time",
                "serp_height",
                "slipoff_count_after_click",
            ]

            return [
                SplitGenerator(
                    name=Split.TRAIN,
                    gen_kwargs={
                        "files": train_files,
                        "query_columns": query_columns,
                        "agg_columns": agg_columns,
                    },
                ),
                SplitGenerator(
                    name=Split.TEST,
                    gen_kwargs={
                        "files": test_files,
                        "query_columns": query_columns,
                        "agg_columns": agg_columns,
                    },
                ),
            ]
        elif self.config.name == Config.ANNOTATIONS:
            test_files = dl_manager.download(["parts/validation.feather"])
            query_columns = [
                "query_id",
                "query_md5",
                "frequency_bucket",
            ]
            agg_columns = [
                "text_md5",
                "label",
                "query_document_embedding",
            ]

            return [
                SplitGenerator(
                    name=Split.TEST,
                    gen_kwargs={
                        "files": test_files,
                        "query_columns": query_columns,
                        "agg_columns": agg_columns,
                    },
                )
            ]
        else:
            raise ValueError("Config name must be in ['clicks', 'annotations']")

    def download_clicks(self, dl_manager, parts: List[int], splits_per_part: int = 10):
        # Each click partition is stored as `splits_per_part` feather files,
        # e.g. parts/part-1_split-0.feather ... parts/part-1_split-9.feather.
        urls = [
            f"parts/part-{p}_split-{s}.feather"
            for p in parts
            for s in range(splits_per_part)
        ]

        return dl_manager.download(urls)

    def _generate_examples(
        self,
        files: List[str],
        query_columns: List[str],
        agg_columns: List[str],
    ):
        """
        Reads dataset partitions and aggregates document features per query.
        :param files: List of .feather files to load from disk.
        :param query_columns: Columns with one value per query. E.g., query_id,
            frequency bucket, etc.
        :param agg_columns: Columns with one value per document that should be
            aggregated per query. E.g., click, position, query_document_embeddings, etc.
        :return: Yields one (key, sample) pair per query.
        """
        for file in files:
            df = pd.read_feather(file)
            current_query_id = None
            sample_key = None
            sample = None

            # Rows of the same query are assumed to be stored contiguously,
            # so a change in query_id marks the start of a new sample.
            for i in range(len(df)):
                row = df.iloc[i]

                if current_query_id != row["query_id"]:
                    if current_query_id is not None:
                        yield sample_key, sample

                    current_query_id = row["query_id"]
                    sample_key = f"{file}-{current_query_id}"
                    sample = {"n": 0}

                    for column in query_columns:
                        sample[column] = row[column]
                    for column in agg_columns:
                        sample[column] = []

                for column in agg_columns:
                    sample[column].append(row[column])

                sample["n"] += 1

            # Flush the last query of the current file.
            yield sample_key, sample
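
For intuition, `_generate_examples` above assumes that rows belonging to the same `query_id` are stored contiguously in each partition and folds them into one example per query. A self-contained toy sketch of that grouping behaviour (made-up columns and values, using an equivalent `pandas.groupby`):

```Python
import pandas as pd

# Toy stand-in for one .feather partition (made-up values).
df = pd.DataFrame({
    "query_id": ["q1", "q1", "q2"],
    "click": [1, 0, 1],
})

samples = []
for query_id, group in df.groupby("query_id", sort=False):
    samples.append({"query_id": query_id, "click": group["click"].tolist(), "n": len(group)})

print(samples)
# [{'query_id': 'q1', 'click': [1, 0], 'n': 2}, {'query_id': 'q2', 'click': [1], 'n': 1}]
```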