---
license: cc-by-nc-4.0
viewer: false
---
# Baidu ULTR Dataset - UvA BERT-12l-12h
Query-document vectors and clicks for a subset of the [Baidu Unbiased Learning to Rank dataset](https://arxiv.org/abs/2207.03051).
This dataset uses a BERT cross-encoder with 12 layers, trained jointly on a Masked Language Modeling (MLM) and a click-through-rate (CTR) prediction task, to compute 768-dimensional query-document vectors.
The model is available under `model/`.

## Setup
1. Install the Hugging Face [datasets](https://huggingface.co/docs/datasets/installation) library
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. Optionally, you might need to install [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) if you cannot install `pyarrow >= 14.0.1`
4. You can now use the dataset as described below.

## Load train / test click dataset
```python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_uva-mlm-ctr",
    name="clicks",
    split="train",  # one of ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```

## Load expert annotations
```python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_uva-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```

## Available features
Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):

### Click dataset
| name                      | dtype           | description |
|---------------------------|-----------------|-------------|
| query_id                  | string          | Baidu query_id |
| query_md5                 | string          | MD5 hash of query text |
| url_md5                   | List[string]    | MD5 hash of document URL, the most reliable document identifier |
| text_md5                  | List[string]    | MD5 hash of document title and abstract |
| query_document_embedding  | Tensor[float16] | BERT CLS token |
| click                     | Tensor[int32]   | Click / no click on a document |
| n                         | int32           | Number of documents for the current query, useful for padding |
| position                  | Tensor[int32]   | Position in ranking (does not always match the original item position) |
| media_type                | Tensor[int32]   | Document type (label encoding recommended as ids do not occupy a continuous integer range) |
| displayed_time            | Tensor[float32] | Seconds a document was displayed on screen |
| serp_height               | Tensor[int32]   | Pixel height of a document on screen |
| slipoff_count_after_click | Tensor[int32]   | Number of times a document was scrolled off screen after previously clicking on it |

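Since the `media_type` ids do not occupy a contiguous integer range, a minimal label-encoding sketch could look like the following (plain Python; the raw ids here are made up for illustration, not taken from the dataset):

```python
# Map raw media_type ids (non-contiguous; example values only) to a dense
# 0..K-1 range, e.g. before feeding them into an embedding layer.
raw_media_types = [2, 7, 2, 42, 7]

# Build the mapping from the ids actually observed in the data.
id_to_index = {media_id: index for index, media_id in enumerate(sorted(set(raw_media_types)))}

encoded = [id_to_index[media_id] for media_id in raw_media_types]
print(encoded)  # [0, 1, 0, 2, 1]
```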
### Expert annotation dataset
| name                     | dtype           | description |
|--------------------------|-----------------|-------------|
| query_id                 | string          | Baidu query_id |
| query_md5                | string          | MD5 hash of query text |
| text_md5                 | List[string]    | MD5 hash of document title and abstract |
| query_document_embedding | Tensor[float16] | BERT CLS token |
| label                    | Tensor[int32]   | Relevance judgment on a scale from 0 (bad) to 4 (excellent) |
| n                        | int32           | Number of documents for the current query, useful for padding |
| frequency_bucket         | int32           | Monthly query frequency bucket from 0 (high frequency) to 9 (low frequency) |

## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:

```python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List):
    batch = defaultdict(list)

    # Collect the per-query tensors of each sample into lists.
    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    # Pad all queries in the batch to the length of the longest query.
    return {
        "query_document_embedding": pad_sequence(batch["query_document_embedding"], batch_first=True),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
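To see what the padding produces, here is a toy example with two queries of unequal length (dummy click values, not real data); the `n` feature can later be used to mask out the padded positions:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Clicks from two queries with 3 and 2 documents, respectively (dummy values).
clicks = [torch.tensor([0, 1, 0]), torch.tensor([1, 0])]

# pad_sequence right-pads with zeros up to the longest query in the batch.
padded = pad_sequence(clicks, batch_first=True)
print(padded.tolist())  # [[0, 1, 0], [1, 0, 0]]

# Build a mask from `n` that is True for real documents and False for padding.
n = torch.tensor([3, 2])
mask = torch.arange(padded.shape[1]) < n[:, None]
print(mask.tolist())  # [[True, True, True], [True, True, False]]
```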