---
license: cc-by-nc-4.0
viewer: false
---
# Baidu ULTR Dataset - Baidu BERT-12l-12h
Query-document vectors and clicks for a subset of the [Baidu Unbiased Learning to Rank
dataset](https://arxiv.org/abs/2207.03051).
This dataset uses Baidu's 12-layer BERT cross-encoder, released in the [official starter-kit](https://github.com/ChuXiaokai/baidu_ultr_dataset/), to compute 768-dimensional query-document vectors.


## Setup
1. Install huggingface [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. If you cannot install `pyarrow >= 14.0.1`, you may need to install the [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix)
4. You can now use the dataset as described below.

## Load train / test click dataset:
```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="clicks",
    split="train", # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch") #  [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
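
After setting the format, each row holds one query with per-document tensors. As a quick sanity check (a sketch, not part of the official examples; the shapes follow from the feature table below):

```Python
sample = dataset[0]
print(sample["n"])                               # number of documents for this query
print(sample["click"].shape)                     # torch.Size([n])
print(sample["query_document_embedding"].shape)  # torch.Size([n, 768])
```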

## Load expert annotations:
```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch") #  [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
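
As a quick sketch (not part of the official examples), the 0-4 relevance judgments can be tallied across all annotated query-document pairs:

```Python
from collections import Counter

# Labels are per-query tensors of length n; flatten them into one counter.
label_counts = Counter(int(label) for row in dataset for label in row["label"])
print(label_counts)
```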

## Available features
Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):

### Click dataset
| name                         | dtype          | description |
|------------------------------|----------------|-------------|
| query_id                     | string         | Baidu query_id |
| query_md5                    | string         | MD5 hash of query text |
| url_md5                      | List[string]   | MD5 hash of document url, most reliable document identifier |
| text_md5                     | List[string]   | MD5 hash of document title and abstract |
| query_document_embedding     | Tensor[float16]| BERT CLS token |
| click                        | Tensor[int32]  | Click / no click on a document |
| n                            | int32          | Number of documents for current query, useful for padding |
| position                     | Tensor[int32]  | Position in ranking (does not always match original item position) |
| media_type                   | Tensor[int32]  | Document type (label encoding recommended, as ids do not occupy a continuous integer range; see the sketch below the table) |
| displayed_time               | Tensor[float32]| Seconds a document was displayed on screen |
| serp_height                  | Tensor[int32]  | Pixel height of a document on screen |
| slipoff_count_after_click    | Tensor[int32]  | Number of times a document was scrolled off screen after previously clicking on it |
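
Since the `media_type` ids are sparse, a simple label encoding can map them to a contiguous range. A minimal sketch, assuming the click `dataset` loaded above (`encode_media_type` is a hypothetical helper, not part of the dataset):

```Python
import torch

# Collect the sparse media_type ids observed in the dataset and map them
# to a contiguous 0..K-1 range, e.g. before feeding an embedding layer.
unique_ids = sorted({int(t) for row in dataset for t in row["media_type"]})
id_to_index = {media_id: index for index, media_id in enumerate(unique_ids)}

def encode_media_type(media_type: torch.Tensor) -> torch.Tensor:
    return torch.tensor([id_to_index[int(t)] for t in media_type])
```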


### Expert annotation dataset
| name                         | dtype          | description |
|------------------------------|----------------|-------------|
| query_id                     | string         | Baidu query_id |
| query_md5                    | string         | MD5 hash of query text |
| text_md5                     | List[string]   | MD5 hash of document title and abstract |
| query_document_embedding     | Tensor[float16]| BERT CLS token |
| label                        | Tensor[int32]  | Relevance judgment on a scale from 0 (bad) to 4 (excellent) |
| n                            | int32          | Number of documents for current query, useful for padding |
| frequency_bucket             | int32          | Monthly frequency of query (bucket) from 0 (high frequency) to 9 (low frequency) |

## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding: 

```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List[dict]):
    # Collect the per-query tensors of each sample into one list per feature:
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    # Zero-pad all per-document tensors to the longest query in the batch;
    # `n` keeps the true number of documents per query:
    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }

loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
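
During training, padded positions should typically be ignored. A minimal sketch (continuing from the example above) that derives a boolean mask from `n`:

```Python
# mask[i, j] is True for real documents and False for padding,
# e.g. to exclude padded positions from a loss computation.
for batch in loader:
    max_docs = batch["click"].shape[1]
    mask = torch.arange(max_docs) < batch["n"].unsqueeze(-1)  # [batch_size, max_docs]
    ...
```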